From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Jan Wieck <JanWieck(at)Yahoo(dot)com>
Cc: Ang Chin Han <angch(at)bytecraft(dot)com(dot)my>, Christopher Browne <cbbrowne(at)acm(dot)org>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Experimental patch for inter-page delay in VACUUM
Date: 2003-11-04 16:49:03
Message-ID: 22712.1067964543@sss.pgh.pa.us
Lists: pgsql-hackers
Jan Wieck <JanWieck(at)Yahoo(dot)com> writes:
> That is part of the idea. The whole idea is to issue "physical" writes
> at a fairly steady rate without increasing their number substantially
> or interfering too much with the drive's opinion about their order.
> I think O_SYNC for random access can be in conflict with write
> reordering.
Good point. But if we issue lots of writes without fsync then we still
have the problem of a write storm when the fsync finally occurs, while
if we fsync too often then we constrain the write order too much. There
will need to be some tuning here.
> The way I see the background writer operating is that it keeps the
> buffers clean in the order of the LRU chain(s), because those are the
> buffers most likely to get replaced soon. In my experimental ARC code
> it would traverse the T1 and T2 queues from LRU to MRU, write out n1
> and n2 dirty buffers (n1+n2 configurable), then fsync all files that
> have been involved in that, nap depending on how far down the queues
> it got (to increase the write rate when running low on clean buffers),
> and do it all over again.
You probably need one more knob here: how often to issue the fsyncs.
I'm not convinced "once per outer loop" is a sufficient answer.
Otherwise this is sounding pretty good.
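
For illustration only, here is a minimal standalone C sketch of the kind
of loop being discussed, with an extra knob controlling how many write
rounds go by between fsyncs. The function names, knobs, and defaults are
hypothetical stand-ins, not the experimental patch or PostgreSQL internals;
the queue-scanning and fsync helpers are stubs.

    #include <unistd.h>

    static int bgwriter_t1_maxpages = 30;      /* "n1": dirty T1 buffers per round */
    static int bgwriter_t2_maxpages = 30;      /* "n2": dirty T2 buffers per round */
    static int bgwriter_rounds_per_fsync = 1;  /* extra knob: rounds between fsyncs */
    static int bgwriter_delay_ms = 200;        /* base nap between rounds */

    /* Stub: scan one ARC queue from LRU to MRU, write up to max_writes dirty
     * buffers, and report how many buffers were inspected along the way.
     * Returns the number of dirty buffers actually written. */
    static int
    scan_queue_lru_to_mru(int queue, int max_writes, int *inspected)
    {
        (void) queue;
        (void) max_writes;
        *inspected = 0;
        return 0;
    }

    /* Stub: fsync every relation file touched since the last call. */
    static void
    fsync_touched_files(void)
    {
    }

    static void
    background_writer_loop(void)
    {
        int rounds_since_fsync = 0;

        for (;;)
        {
            int t1_seen, t2_seen;

            /* Issue steady write()s near the LRU ends of T1 and T2, leaving
             * the actual on-disk write ordering to the kernel and the drive. */
            int written = scan_queue_lru_to_mru(1, bgwriter_t1_maxpages, &t1_seen)
                        + scan_queue_lru_to_mru(2, bgwriter_t2_maxpages, &t2_seen);

            /* fsync every N rounds, so the fsync frequency can be tuned
             * independently of the write rate. */
            if (++rounds_since_fsync >= bgwriter_rounds_per_fsync)
            {
                fsync_touched_files();
                rounds_since_fsync = 0;
            }

            /* Nap according to how far down the queues the scan had to go:
             * if dirty buffers crowd the LRU ends we are running low on
             * clean buffers, so shorten the nap and write more aggressively. */
            int nap_ms = bgwriter_delay_ms;
            if (written > (t1_seen + t2_seen) / 2)
                nap_ms /= 2;

            usleep((useconds_t) nap_ms * 1000);
        }
    }

    int
    main(void)
    {
        background_writer_loop();
        return 0;
    }

The bgwriter_rounds_per_fsync knob is only meant to show how the fsync
frequency could be decoupled from the per-round write quota.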
regards, tom lane