From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Jan Wieck <JanWieck(at)Yahoo(dot)com>
Cc: Ang Chin Han <angch(at)bytecraft(dot)com(dot)my>, Christopher Browne <cbbrowne(at)acm(dot)org>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Experimental patch for inter-page delay in VACUUM
Date: 2003-11-04 15:58:46
Message-ID: 22408.1067961526@sss.pgh.pa.us
Lists: pgsql-hackers
Jan Wieck <JanWieck(at)Yahoo(dot)com> writes:
> Tom Lane wrote:
>> I have never been happy with the fact that we use sync(2) at all.
> Sure, it does too much. But together with the other layer of
> indirection, the virtual file descriptor pool, what is the exact
> guaranteed behaviour of
>     write(); close(); open(); fsync();
> across platforms?
That isn't guaranteed, which is why we have to use sync() at the
moment. To go over to fsync or O_SYNC we'd need more control over which
file descriptors are used to issue writes. Which is why I was thinking
about moving the writes to a centralized writer process.
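For illustration, here is a minimal C sketch of the sequence Jan describes: the write goes out through one descriptor, which is then closed, and the fsync is issued through a later descriptor for the same file. Whether that fsync is guaranteed to flush the earlier write is exactly the cross-platform question; the file name and error handling here are hypothetical, not anything in the backend.

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/*
 * Hypothetical illustration of the pattern in question: write through
 * one file descriptor, close it, reopen the file (as the vfd pool may
 * do), and fsync through the new descriptor.  Whether the fsync
 * reliably covers the earlier write is the portability question.
 */
int
flush_via_second_fd(const char *path)
{
	const char	buf[] = "some page image";
	int			fd;

	fd = open(path, O_WRONLY | O_CREAT, 0600);
	if (fd < 0)
		return -1;
	if (write(fd, buf, strlen(buf)) < 0)
	{
		close(fd);
		return -1;
	}
	close(fd);					/* descriptor used for the write is gone */

	fd = open(path, O_WRONLY);	/* reopened, e.g. through the vfd pool */
	if (fd < 0)
		return -1;
	if (fsync(fd) < 0)			/* does this flush the earlier write? */
	{
		close(fd);
		return -1;
	}
	close(fd);
	return 0;
}
```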
>> Actually, once you build it this way, you could make all writes
>> synchronous (open the files O_SYNC) so that there is never any need for
>> explicit fsync at checkpoint time.
> Yes, but then the configuration leans more towards "take over the RAM"
Why? The idea is to try to issue writes at a fairly steady rate, which
strikes me as much better than the current behavior. I don't see why it
would force you to have large numbers of buffers available. You'd want
a few thousand, no doubt, but that's not a large number.
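As a rough sketch of what a centralized writer pacing synchronous writes might look like, assuming the file was opened with O_SYNC; the pacing interval, buffer arguments, and function name here are hypothetical, not the actual design being proposed.

```c
#include <sys/types.h>
#include <unistd.h>

/*
 * Hypothetical sketch of a writer process pacing O_SYNC writes.
 * Each write returns only after the data has reached disk, so no
 * separate fsync pass would be needed at checkpoint time.
 */
#define BLCKSZ			8192
#define WRITE_DELAY_US	10000	/* pause between writes; a tuning knob */

void
paced_writer_loop(int fd, char *dirty_pages[], const off_t offsets[], int npages)
{
	int			i;

	for (i = 0; i < npages; i++)
	{
		/* fd was opened with O_SYNC, so this blocks until durable */
		pwrite(fd, dirty_pages[i], BLCKSZ, offsets[i]);
		usleep(WRITE_DELAY_US);	/* spread the I/O out over time */
	}
}

/*
 * The caller would open the relation file roughly like:
 *     int fd = open(path, O_WRONLY | O_SYNC);
 */
```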
regards, tom lane