From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Greg Stark <gsstark(at)mit(dot)edu>, Qingqing Zhou <zhouqq(at)cs(dot)toronto(dot)edu>, ITAGAKI Takahiro <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: sync_file_range()
Date: 2006-06-20 08:44:52
Message-ID: 1150793092.2587.134.camel@localhost.localdomain
Lists: pgsql-hackers
On Mon, 2006-06-19 at 21:35 -0400, Tom Lane wrote:
> Greg Stark <gsstark(at)mit(dot)edu> writes:
> > Come to think of it I wonder whether there's anything to be gained by using
> > smaller files for tables. Instead of 1G files maybe 256M files or something
> > like that to reduce the hit of fsyncing a file.
> sync_file_range() is not that exactly, but since it lets you request
> syncing and then go back and wait for the syncs later, we could get the
> desired effect with two passes over the file list. (If the file list
> is longer than our allowed number of open files, though, the extra
> opens/closes could hurt.)
So we would use the async properties of sync, but not the file range
support? Sounds like it could help with multiple filesystems.
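
For concreteness, a minimal sketch of that two-pass scheme, assuming
Linux's sync_file_range() flags; the file-list handling here is
hypothetical, not actual PostgreSQL checkpoint code:

/*
 * Hypothetical two-pass sync over a list of files.  Pass 1 kicks off
 * asynchronous writeback on every file without blocking; pass 2
 * revisits the list and waits for those writes to complete.
 * nbytes == 0 means "from offset through end of file".
 */
#define _GNU_SOURCE
#include <fcntl.h>

static void
sync_file_list(const char **paths, int nfiles)
{
    int     i;

    /* Pass 1: request writeback of each whole file, without waiting. */
    for (i = 0; i < nfiles; i++)
    {
        int     fd = open(paths[i], O_RDONLY);

        if (fd < 0)
            continue;           /* error handling elided */
        sync_file_range(fd, 0, 0, SYNC_FILE_RANGE_WRITE);
        close(fd);
    }

    /* Pass 2: wait for the writeback queued in pass 1. */
    for (i = 0; i < nfiles; i++)
    {
        int     fd = open(paths[i], O_RDONLY);

        if (fd < 0)
            continue;
        sync_file_range(fd, 0, 0,
                        SYNC_FILE_RANGE_WAIT_BEFORE |
                        SYNC_FILE_RANGE_WRITE |
                        SYNC_FILE_RANGE_WAIT_AFTER);
        close(fd);
    }
}

Note that pass 2 reopens every file, which is exactly the extra
open/close cost you mention when the list is longer than our allowed
number of open files.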
> Indeed, I've been wondering lately if we shouldn't resurrect
> LET_OS_MANAGE_FILESIZE and make that the default on systems with
> largefile support. If nothing else it would cut down on open/close
> overhead on very large relations.
Agreed.
--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com