From: Scott Carey <scott(at)richrelevance(dot)com>
To: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, Merlin Moncure <mmoncure(at)gmail(dot)com>, postgres performance list <pgsql-performance(at)postgresql(dot)org>
Subject: Re: The shared buffers challenge
Date: 2011-05-27 16:44:30
Message-ID: CA051DED.39EC4%scott@richrelevance.com
Lists: pgsql-performance
So how far do you go? 128MB? 32MB? 4MB?
This is anecdotal and an assumption, but I'm pretty confident that on any server
with at least 1GB of dedicated RAM, setting it any lower than 200MB isn't even
going to help latency (assuming the checkpoint and log configuration is in the
realm of sane, and max_connections * work_mem is sane).
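For what it's worth, a quick way to sanity-check those knobs on a live server is
just to query pg_settings -- something like the sketch below. The list of names
and the worst-case arithmetic are illustrative, not a recommendation (a single
query can allocate work_mem more than once, so the product is really a floor):

-- Illustrative only: look at the settings mentioned above.
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('shared_buffers',
               'checkpoint_segments',          -- or max_wal_size on newer releases
               'checkpoint_completion_target',
               'wal_buffers',
               'work_mem',
               'max_connections');

-- Rough floor on worst-case sort/hash memory: max_connections * work_mem (in kB).
SELECT (SELECT setting::bigint FROM pg_settings WHERE name = 'max_connections')
     * (SELECT setting::bigint FROM pg_settings WHERE name = 'work_mem')
     AS worst_case_work_mem_kb;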
The defaults have been so small for so long on most platforms that any increase
over the default generally helps performance -- in many cases dramatically. So
since more is better, most users assume that even more must be better still.
But it's not so simple: there are drawbacks to a larger buffer and diminishing
returns with larger sizes. I think listing the drawbacks of a larger buffer, and
the symptoms that can result, would be a big win.
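One concrete way to check whether a big buffer pool is actually being put to use
is the contrib pg_buffercache view. A rough sketch (assuming the extension is
installed and you're connected as a superuser; what counts as "too cold" is a
judgment call, not something this thread settles):

-- Requires contrib/pg_buffercache.
-- CREATE EXTENSION pg_buffercache;   -- 9.1+; older releases install it via the SQL script

-- Distribution of buffer usage counts, plus how many buffers are dirty.
-- A large pool dominated by unused or usagecount 0/1 pages is a hint that
-- you're past the point of diminishing returns.
SELECT usagecount,
       count(*) AS buffers,
       sum(CASE WHEN isdirty THEN 1 ELSE 0 END) AS dirty
FROM pg_buffercache
GROUP BY usagecount
ORDER BY usagecount;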
And there is an OS component to it too. You can actually get away with
shared_buffers at 90% of RAM on Solaris. Linux will explode if you try
that (unless recent kernels have fixed its shared memory accounting).
On 5/26/11 8:10 AM, "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov> wrote:
>Merlin Moncure <mmoncure(at)gmail(dot)com> wrote:
>
>> So, the challenge is this: I'd like to see repeatable test cases
>> that demonstrate regular performance gains > 20%. Double bonus
>> points for cases that show gains > 50%.
>
>Are you talking throughput, maximum latency, or some other metric?
>
>In our shop the metric we tuned for in reducing shared_buffers was
>getting the number of "fast" queries (which normally run in under a
>millisecond) which would occasionally, in clusters, take over 20
>seconds (and thus be canceled by our web app and present as errors
>to the public) down to zero. While I know there are those who care
>primarily about throughput numbers, that's worthless to me without
>maximum latency information under prolonged load. I'm not talking
>90th percentile latency numbers, either -- if 10% of our web
>requests were timing out the villagers would be coming after us with
>pitchforks and torches.
>
>-Kevin
>