From: | "Zeugswetter Andreas DAZ SD" <ZeugswetterA(at)spardat(dot)at> |
---|---|
To: | "Simon Riggs" <simon(at)2ndquadrant(dot)com>, "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
Cc: | "Neil Conway" <neilc(at)samurai(dot)com>, "PostgreSQL-development" <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: bgwriter changes |
Date: | 2004-12-15 10:39:44 |
Message-ID: | 46C15C39FEB2C44BA555E356FBCD6FA40184D273@m0114.s-mxs.net |
Lists: pgsql-hackers
> > > and stops early when either maxpages is reached or bgwriter_percent
> > > pages are scanned?
> >
> > Only if you redefine the meaning of bgwriter_percent. At present it's
> > defined by reference to the total number of dirty pages, and that can't
> > be known without collecting them all.
> >
> > If it were, say, a percentage of the total length of the T1/T2 lists,
> > then we'd have some chance of stopping the scan early.
>
> ...which was exactly what was proposed for option (3).
But the benchmark run was with bgwriter_percent 100. I wanted to point out
that I think 100% is too much (it writes hot pages multiple times between checkpoints).
In the benchmark the bgwriter obviously falls behind because the delay is too long,
but if you reduce the delay you will start to see what I mean.
Actually I think what is really needed is a maximum number of pages we want dirty
at checkpoint time. Since that would again require scanning all pages, the next best
definition would imho be to stop at a percentage (or a fixed number of pages short) of total T1/T2.
Then you can still calculate a worst-case I/O for the checkpoint (assume that all hot pages are dirty).
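
Roughly what that cutoff could look like (a sketch only, not the real bufmgr code;
NBuffersT1T2, StrategyGetNextBuffer, BufferIsDirty and FlushOneBuffer are placeholder
names standing in for whatever the ARC/bufmgr interfaces actually provide):

/*
 * Sketch: the scan stops as soon as EITHER bgwriter_maxpages dirty
 * pages have been written OR bgwriter_percent of the combined T1/T2
 * length has been examined, so the total number of dirty pages never
 * has to be known up front.
 */
static void
BgWriterScan(int bgwriter_percent, int bgwriter_maxpages)
{
    int scan_limit = (NBuffersT1T2 * bgwriter_percent) / 100;
    int scanned = 0;
    int written = 0;

    while (scanned < scan_limit && written < bgwriter_maxpages)
    {
        int buf_id = StrategyGetNextBuffer(scanned); /* walk T1/T2 from the LRU end */

        scanned++;
        if (BufferIsDirty(buf_id))
        {
            FlushOneBuffer(buf_id);
            written++;
        }
    }
}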
Andreas