Re: Limit of bgwriter_lru_maxpages of max. 1000?

From: Greg Smith <gsmith(at)gregsmith(dot)com>
To: Gerhard Wiesinger <lists(at)wiesinger(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Limit of bgwriter_lru_maxpages of max. 1000?
Date: 2009-10-02 20:19:17
Message-ID: alpine.GSO.2.01.0910021610130.13300@westnet.com
Lists: pgsql-general

On Fri, 2 Oct 2009, Gerhard Wiesinger wrote:

> In my experience flushing I/O as soon as possible is the best solution.

That's what everyone assumes, but detailed benchmarks of PostgreSQL don't
actually support that view given how the database operates. We went
through a lot of work in 8.3 related to how to optimize the database as a
system that disproved some of the theories about what would work well
here.

What happens if you're really aggressive about writing blocks out as soon
as they're dirty is that you waste a lot of I/O on things that just get
dirty again later. Since checkpoint time is the only period where blocks
*must* get written, the approach that worked the best for reducing
checkpoint spikes was to spread the checkpoint writes out over a very wide
period. The only remaining work that made sense for the background writer
was to tightly focus its I/O on blocks that are about to be evicted due to
low usage no matter what.

In most cases where people think they need more I/O from the background
writer, what you actually want is to increase checkpoint_segments,
checkpoint_completion_target, and checkpoint_timeout in order to spread
the checkpoint I/O out over a longer period. The stats you provided
suggest this is working exactly as intended.
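To make the suggestion concrete, here's a sketch of the sort of
postgresql.conf changes meant (example values only, not a recommendation;
tune against your workload and recovery-time requirements):

```
# Spread checkpoint I/O over a longer period
checkpoint_segments = 32            # default 3; allows more WAL between checkpoints
checkpoint_timeout = 15min          # default 5min
checkpoint_completion_target = 0.9  # default 0.5; aim writes across 90% of the interval
```

Raising checkpoint_segments and checkpoint_timeout makes checkpoints less
frequent, while a higher checkpoint_completion_target spreads each one's
writes over more of the interval between them.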

As far as work to improve the status quo, IMHO the next thing to improve
is getting the fsync calls made at checkpoint time more intelligently
spread over the whole period. That's got a better payback than trying to
make the background writer more aggressive, which is basically a doomed
cause.

> So I'd like to do some tests with new statistics. Any fast way to reset
> statistics for all databases for pg_stat_bgwriter?

No, that's an open TODO item I keep meaning to fix; we lost that
capability at one point. What I do is create a table that looks just like
it, but with a time stamp, and save snapshots to that table. Then a view
on top can generate just the deltas between two samples to show activity
during that time. It's handy to have such a history anyway.
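A minimal sketch of that snapshot table and delta view might look like
this (table and view names are made up, and the assumption is the
pg_stat_bgwriter column set from 8.3/8.4; adjust to your version):

```
-- Snapshot table: same columns as pg_stat_bgwriter plus a timestamp
CREATE TABLE bgwriter_snapshot AS
  SELECT now() AS snap_time, * FROM pg_stat_bgwriter LIMIT 0;

-- Take a sample; run this periodically, e.g. from cron
INSERT INTO bgwriter_snapshot
  SELECT now(), * FROM pg_stat_bgwriter;

-- Deltas between each sample and the one before it
CREATE VIEW bgwriter_deltas AS
  SELECT cur.snap_time,
         cur.buffers_checkpoint - prev.buffers_checkpoint AS buffers_checkpoint,
         cur.buffers_clean      - prev.buffers_clean      AS buffers_clean,
         cur.buffers_backend    - prev.buffers_backend    AS buffers_backend
  FROM bgwriter_snapshot cur
  JOIN bgwriter_snapshot prev
    ON prev.snap_time = (SELECT max(snap_time)
                         FROM bgwriter_snapshot s
                         WHERE s.snap_time < cur.snap_time);
```

Querying the view then shows activity per sampling interval rather than
the ever-growing totals, which is what you want when comparing test runs.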

--
* Greg Smith gsmith(at)gregsmith(dot)com http://www.gregsmith.com Baltimore, MD
