From: Ben Chobot <bench(at)silentmedia(dot)com>
To: Greg Smith <greg(at)2ndquadrant(dot)com>
Cc: Alvaro Herrera <alvherre(at)commandprompt(dot)com>, pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: pg_buffercache's usage count
Date: 2010-02-24 23:01:05
Message-ID: 758E6A58-B60F-47F4-9D2E-14BF1FECFD89@silentmedia.com
Lists: pgsql-general
On Feb 24, 2010, at 11:09 AM, Greg Smith wrote:
> Alvaro Herrera wrote:
>> BTW the only reason you don't see buffers having a larger "usage" is
>> that the counters are capped at that value.
>>
>
> Right, the usage count is limited to 5 for no reason other than "that seems like a good number". We keep hoping to come across a data set and application with a repeatable benchmark where most of the data ends up at 5 but there is still a lot of buffer cache churn, so we can test whether a further increase would be valuable. So far nobody has actually found such a workload. If I shrank shared_buffers on Ben's data, I think I could create that situation. As is usually the case, though, I doubt he has another server with 128GB of RAM hanging around just to run that experiment on, which has always been the reason I can't simulate this more easily--the systems it's prone to happening on aren't cheap.
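
(For anyone following along: the per-buffer counts Greg and Alvaro are describing can be inspected directly. A minimal sketch of such a query, assuming the pg_buffercache extension is already installed in the database you connect to, might look like:

    -- Count how many shared buffers sit at each usage count (0-5);
    -- unused buffers report NULL.
    SELECT usagecount, count(*) AS buffers
    FROM pg_buffercache
    GROUP BY usagecount
    ORDER BY usagecount;

A cache where nearly everything reports 5 while buffers are still being evicted frequently is the kind of workload Greg says he is looking for.)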
Well, as it happens, we *did* just get our third Slony node in today, and it could spend some time on burn-in experiments if that would be helpful. Unfortunately, I won't be able to drive the same load against it, so I don't know how useful it would be.