Rohan Malhotra <yourbuddyrohan(at)gmail(dot)com> wrote:
First off, for a one-row result with too many values to fit on one
line, you might want to use this in psql:
\x on
More importantly, you seem to be misinterpreting the numbers.
You've allocated 2,205,969,940 buffers. Of those allocations, the
allocating backend had to first write a dirty buffer to free up a
buffer to use 1,663,650 times. That's pretty small as a percentage
of allocations, but since it's larger than the other causes of dirty
buffer writes (699,136 during checkpoints and 581,839 by the
background writer), I would be tempted to make the background writer
a little more aggressive. Assuming you're currently at the defaults
for these, perhaps:
bgwriter_lru_maxpages = 200
bgwriter_lru_multiplier = 4
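To make the interpretation above concrete, here is a small recomputation of those percentages from the quoted counters (a sketch in Python; the variable names follow the pg_stat_bgwriter columns, but the numbers are just the ones from your output):

```python
# Counters quoted from the pg_stat_bgwriter output above.
buffers_alloc = 2_205_969_940      # total buffer allocations
buffers_backend = 1_663_650        # dirty writes done by backends
buffers_checkpoint = 699_136       # dirty writes during checkpoints
buffers_clean = 581_839            # dirty writes by the background writer

# Backend writes as a share of all allocations: tiny.
pct_of_allocs = 100.0 * buffers_backend / buffers_alloc
print(f"{pct_of_allocs:.4f}%")     # about 0.0754%

# ...but as a share of all dirty-buffer writes, backends dominate,
# which is why nudging the background writer is worth considering.
total_dirty = buffers_backend + buffers_checkpoint + buffers_clean
pct_of_dirty = 100.0 * buffers_backend / total_dirty
print(f"{pct_of_dirty:.1f}%")      # about 56.5%
```

So backends are doing more than half of the dirty-buffer writing even though the absolute rate is low.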
This may (or may not) increase the physical writes on your system,
so you want to closely monitor the impact of the change in terms of
whatever metrics matter most to you. For example, in our shop, we
tend to tune our big databases which back a website such that we get
zero "write storms" which cause delays of 20 seconds or more on
queries which normally run in less than a millisecond. Your
concerns may be different.
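For what it's worth, the way those two settings interact is documented: each bgwriter round cleans at most bgwriter_lru_multiplier times the recent allocation estimate, capped at bgwriter_lru_maxpages. A simplified sketch (the real C code also smooths the allocation estimate across rounds):

```python
# Simplified sketch of the documented background-writer pacing rule.
# Parameter names and defaults-as-suggested match the settings above;
# this is an illustration, not the actual server implementation.

def bgwriter_write_budget(recent_allocs,
                          bgwriter_lru_maxpages=200,
                          bgwriter_lru_multiplier=4.0):
    """Upper bound on dirty buffers cleaned in one bgwriter round:
    multiplier times the recent allocation estimate, capped by
    bgwriter_lru_maxpages."""
    return min(bgwriter_lru_maxpages,
               int(bgwriter_lru_multiplier * recent_allocs))

# A quiet round is multiplier-limited; a busy round hits the cap:
print(bgwriter_write_budget(30))   # 120
print(bgwriter_write_budget(100))  # 200
```

Raising either setting lets the background writer absorb more of the cleaning work that backends are currently doing themselves.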
For a more detailed treatment of the issue, look for posts by Greg
Smith; or better yet, buy his book:
http://www.postgresql.org/docs/books/
-Kevin