Merlin Moncure <mmoncure@gmail.com> wrote:
> So, the challenge is this: I'd like to see repeatable test cases
> that demonstrate regular performance gains > 20%. Double bonus
> points for cases that show gains > 50%.
Are you talking throughput, maximum latency, or some other metric?
In our shop, the metric we tuned for when reducing shared_buffers
was the number of "fast" queries (which normally run in under a
millisecond) that would occasionally, in clusters, take over 20
seconds (and thus be canceled by our web app and surface to the
public as errors); we wanted that number down to zero. While I know
there are those who care primarily about throughput numbers, those
are worthless to me without maximum latency information under
prolonged load. And I'm not talking 90th-percentile latency numbers,
either -- if 10% of our web requests were timing out, the villagers
would be coming after us with pitchforks and torches.
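
If it helps make that concrete, here's a rough sketch (not something
we actually run -- just an illustration, and the input format is
made up) that reads one latency per line in milliseconds, however
you collect them, and prints the 90th percentile next to the
maximum, which is where stalls like those 20-second ones show up:

    #!/usr/bin/env python
    # Compare 90th-percentile latency with maximum latency.
    # Hypothetical input: a text file with one latency per line,
    # in milliseconds, from whatever timing source you have.
    import sys
    import math

    def percentile(sorted_vals, pct):
        # Nearest-rank percentile of an already-sorted list.
        k = max(0, int(math.ceil(pct / 100.0 * len(sorted_vals))) - 1)
        return sorted_vals[k]

    def main(path):
        with open(path) as f:
            latencies = sorted(float(line) for line in f if line.strip())
        if not latencies:
            sys.exit("no samples")
        p90 = percentile(latencies, 90)
        worst = latencies[-1]
        print("samples:         %d" % len(latencies))
        print("90th percentile: %.3f ms" % p90)
        print("maximum:         %.3f ms" % worst)
        # A run can look fine at the 90th percentile while the
        # maximum still shows multi-second stalls.
        if worst > 100 * p90:
            print("warning: worst case is >100x the 90th percentile")

    if __name__ == "__main__":
        main(sys.argv[1])

The exact source of the timings doesn't matter much; the point is
just that the maximum and the 90th percentile can tell very
different stories under prolonged load.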
-Kevin