From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: Richard Huxton <dev(at)archonet(dot)com>
Cc: josh(at)agliodbs(dot)com, pgsql-hackers(at)postgresql(dot)org, testperf-general(at)pgfoundry(dot)org
Subject: Re: [Testperf-general] BufferSync and bgwriter
Date: 2004-12-20 09:40:06
Message-ID: 1103535606.2893.172.camel@localhost.localdomain
Lists: pgsql-hackers
On Thu, 2004-12-16 at 17:54, Richard Huxton wrote:
> Josh Berkus wrote:
> >>Clearly, OSDL-DBT2 is not a real world test! That is its benefit, since
> >>it is heavily instrumented and we are able to re-run it many times
> >>without different parameter settings. The application is well known and
> >>doesn't suffer that badly from factors that would allow certain effects
> >>to be swamped. If it had too much randomness or variation, it would be
> >>difficult to interpret.
> >
> >
> > I don't think you followed me. The issue is that for parameters designed to
> > "smooth out spikes" like bgwriter and vacuum delay, it helps to have really
> > bad spikes to begin with. There's a possibility that the parameters (and
> > calculations) that work well for a "steady-state" OLTP application are
> > actually bad for an application with much more erratic usage, just as a high
> > sort_mem is good for DSS and bad for OLTP.
>
> I'm a little concerned that in an erratic, or even just a changing
> environment there isn't going to be any set of values that are "correct".
>
> If I've got this right, the behaviour we're trying to get is:
> 1. Starting from the oldest dirty block,
> 2. Write as many dirty blocks as you can, but don't...
> 3. Re-write frequently used blocks too much (wasteful)
>
> So, can we not just keep track of two numbers:
> 1. Change in the number of dirty blocks this time vs last
> 2. Number of re-writes we perform (count collisions in a hash or
> similar - doesn't need to be perfect).
>
> If #1 is increasing, then we need to become more active (reduce
> bgwriter_delay, increase bgwriter_maxpages).
> If #2 starts to go up, or goes past some threshold then we reduce
> activity (increase bgwriter_delay, decrease bgwriter_maxpages).
> If, of the last N blocks written, C have been collisions, then assume
> we've run out of low-activity blocks to write; stop and sleep.
>
> This has a downside that the figures will never be completely accurate,
> but has the advantage that it will automatically track activity.
>
> I'm clearly beyond my technical knowledge here, so if I haven't
> understood / it's impractical / will never work, then don't be afraid to
> step up and let me know. If it helps, you could always think of me as an
> idiot savant who failed his savant exams :-)
Richard,
I like your ideas very much.
For 8.1 or beyond, it seems clear to me that a self-adapting bgwriter
with no/few parameters is the way forward.
My first step will be to instrument the bgwriter, so we have more input
about the dynamic behaviour of the ARC lists and their effect. Then use
that information to trial an adaptive mechanism along the general lines
you suggest.
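
For concreteness, a rough sketch of the kind of feedback loop described
above. Everything in it is invented for illustration (the AdaptiveState
struct, adjust_bgwriter(), the window size and collision limit); only
bgwriter_delay and bgwriter_maxpages correspond to real GUCs, and in the
backend the counters would have to come from the buffer manager rather
than the toy driver shown here.

/*
 * Hypothetical sketch of the self-tuning loop described above; none of
 * these names exist in the backend today.  The real thing would live in
 * the bgwriter's main loop and read counters kept by the buffer manager.
 */
#include <stdio.h>
#include <stdbool.h>

typedef struct AdaptiveState
{
    int     prev_dirty;         /* dirty buffers seen last cycle */
    int     recent_writes;      /* writes in the current window */
    int     recent_collisions;  /* of those, re-writes of buffers we
                                 * had already written recently */
} AdaptiveState;

/* the only real GUCs here; everything else is invented for the sketch */
static int  bgwriter_delay = 200;       /* ms between rounds */
static int  bgwriter_maxpages = 100;    /* buffers written per round */

#define DELAY_MIN       10
#define DELAY_MAX       1000
#define MAXPAGES_MIN    5
#define MAXPAGES_MAX    1000
#define WINDOW_SIZE     100     /* "last N blocks written" */
#define COLLISION_LIMIT 20      /* "C collisions" => back off */

static int
clamp(int v, int lo, int hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

/*
 * Returns true when the bgwriter should just sleep this round because
 * it seems to have run out of low-activity buffers to write.
 */
static bool
adjust_bgwriter(AdaptiveState *st, int cur_dirty)
{
    int     dirty_delta = cur_dirty - st->prev_dirty;

    st->prev_dirty = cur_dirty;

    if (dirty_delta > 0)
    {
        /* #1: dirty buffers are piling up => become more aggressive */
        bgwriter_delay = clamp(bgwriter_delay / 2, DELAY_MIN, DELAY_MAX);
        bgwriter_maxpages = clamp(bgwriter_maxpages * 2,
                                  MAXPAGES_MIN, MAXPAGES_MAX);
    }
    if (st->recent_collisions > COLLISION_LIMIT)
    {
        /* #2: too many re-writes of hot buffers => back off */
        bgwriter_delay = clamp(bgwriter_delay * 2, DELAY_MIN, DELAY_MAX);
        bgwriter_maxpages = clamp(bgwriter_maxpages / 2,
                                  MAXPAGES_MIN, MAXPAGES_MAX);
    }

    /* C of the last N writes were collisions: stop and sleep */
    if (st->recent_writes >= WINDOW_SIZE)
    {
        bool    give_up = (st->recent_collisions > COLLISION_LIMIT);

        st->recent_writes = 0;
        st->recent_collisions = 0;
        return give_up;
    }
    return false;
}

int
main(void)
{
    AdaptiveState st = {0, 0, 0};

    /* toy driver: dirty count climbs, then collisions start appearing */
    for (int cycle = 0; cycle < 5; cycle++)
    {
        st.recent_writes += 40;
        st.recent_collisions += (cycle >= 3) ? 30 : 2;
        bool    sleep_now = adjust_bgwriter(&st, 500 + cycle * 50);

        printf("cycle %d: delay=%dms maxpages=%d sleep=%d\n",
               cycle, bgwriter_delay, bgwriter_maxpages, (int) sleep_now);
    }
    return 0;
}

The clamps are there so the loop settles somewhere between the two
extremes rather than oscillating wildly; whether halving/doubling is the
right step size is exactly the kind of thing the instrumentation should
tell us.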
--
Best Regards, Simon Riggs