From: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Gavin Sherry <swm(at)linuxworld(dot)com(dot)au>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Bgwriter behavior
Date: 2004-12-21 15:24:42
Message-ID: 200412211524.iBLFOhE26988@candle.pha.pa.us
Lists: pgsql-hackers pgsql-patches
Tom Lane wrote:
> Gavin Sherry <swm(at)linuxworld(dot)com(dot)au> writes:
> > I was also thinking of benchmarking the effect of changing the algorithm
> > in StrategyDirtyBufferList(): currently, for each iteration of the loop we
> > read a buffer from each of T1 and T2. I was wondering what effect reading
> > T1 first then T2 and vice versa would have on performance.
>
> Looking at StrategyGetBuffer, it definitely seems like a good idea to
> try to keep the bottom end of both T1 and T2 lists clean. But we should
> work at T1 a bit harder.
>
> The insight I take away from today's discussion is that there are two
> separate goals here: try to keep backends that acquire a buffer via
> StrategyGetBuffer from being fed a dirty buffer they have to write,
> and try to keep the next upcoming checkpoint from having too much work
> to do. Those are both laudable goals but I hadn't really seen before
> that they may require different strategies to achieve. I'm liking the
> idea that bgwriter should alternate between doing writes in pursuit of
> the one goal and doing writes in pursuit of the other.
It seems we have added a new limitation to bgwriter by not doing a full
scan. With a full scan we could easily grab the first X pages starting
from the end of the LRU list and write them. By not scanning the full
list, we open up the possibility of missing some dirty pages nearer the
front of the LRU. The full scan was removed so we could run bgwriter
more frequently, but we might end up with other problems.
I have a new proposal. The idea is to cause bgwriter to increase its
frequency based on how quickly it finds dirty pages.
First, we remove the GUC bgwriter_maxpages because I don't see a good
way to set a default for that. A default value needs to be based on a
percentage of the full buffer cache size. Second, we make
bgwriter_percent cause the bgwriter to stop its scan once it has found a
number of dirty buffers that matches X% of the buffer cache size. So,
if it is set to 5%, the bgwriter scan stops once it finds enough dirty
buffers to equal 5% of the buffer cache size.
Bgwriter continues to scan starting from the end of the LRU list, just
like it does now.
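
To make that concrete, here is a rough sketch of the scan loop I have in
mind. This is illustrative C only, not actual backend code; the ToyBuffer
type and the function name are made up for the example:

#include <stdbool.h>

/* Toy buffer descriptor; real buffer headers carry much more state. */
typedef struct
{
    int  buf_id;
    bool dirty;
} ToyBuffer;

/*
 * Scan the buffer list from the LRU end (index 0) and collect dirty
 * buffers until we have found bgwriter_percent of the total cache size,
 * or we run out of buffers.  Returns the number of buffers scanned and
 * sets *num_dirty to the number of dirty buffers collected.
 */
static int
scan_for_dirty_buffers(const ToyBuffer *lru, int nbuffers,
                       double bgwriter_percent,
                       int *dirty_ids, int *num_dirty)
{
    int target = (int) (nbuffers * bgwriter_percent / 100.0);
    int scanned = 0;

    *num_dirty = 0;
    while (scanned < nbuffers && *num_dirty < target)
    {
        if (lru[scanned].dirty)
            dirty_ids[(*num_dirty)++] = lru[scanned].buf_id;
        scanned++;
    }
    return scanned;             /* how far into the list we had to look */
}

With bgwriter_percent = 5 and 10000 buffers, the loop stops as soon as it
has collected 500 dirty buffers, however far into the list that takes it.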
Now, to control the bgwriter frequency we multiply the fraction of the
list it had to scan by the bgwriter_delay value to determine when to run
bgwriter next. For example, if it finds enough dirty pages by looking
at only 10% of the buffer cache, you multiply 10% (0.10) * bgwriter_delay
and that is when it runs next. If it has to scan 50%, bgwriter runs
next at 50% (0.50) * bgwriter_delay, and if it has to scan the entire
list it is 100% (1.00) * bgwriter_delay.
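
In code form the delay calculation would be something like this (again
just a sketch; the function name is mine):

static int
next_bgwriter_sleep_ms(int buffers_scanned, int nbuffers,
                       int bgwriter_delay_ms)
{
    /* Fraction of the buffer cache we had to scan this round. */
    double fraction = (double) buffers_scanned / (double) nbuffers;

    /* Scale the configured delay: 10% scanned -> sleep 0.10 * delay. */
    return (int) (fraction * bgwriter_delay_ms);
}

So with bgwriter_delay = 200 ms, a round that found its dirty buffers in
the first 10% of the list sleeps only 20 ms, while a round that had to
walk the whole list sleeps the full 200 ms.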
The effect is that bgwriter runs more frequently when there are a lot of
dirty buffers at the end of the LRU _and_ when the bgwriter scan will be
quick. When there are few writes, bgwriter runs less frequently but
writes dirty buffers nearer to the head of the LRU.
--
Bruce Momjian | http://candle.pha.pa.us
pgman(at)candle(dot)pha(dot)pa(dot)us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073