Claudio Freire <klaussfreire(at)gmail(dot)com> wrote:
> On Fri, Sep 9, 2011 at 3:16 PM, Kevin Grittner
> <Kevin(dot)Grittner(at)wicourts(dot)gov> wrote:
>> Add together the shared_buffers setting and whatever the OS tells
>> you is used for cache under your normal load. It's usually 75%
>> of RAM or higher. (NOTE: This doesn't cause any allocation of
>> RAM; it's a hint to the cost calculations.)
>
> In the manual[0] it says to take into account the number of
> concurrent accesses to different indexes and tables:
Hmm. I suspect that the manual is technically correct, except that
it probably only matters in terms of how many connections will
concurrently be executing long-running queries which might access
large swaths of large indexes. In many environments, there are a
lot of maintenance and small query processes, and only occasional
queries where this setting would matter. I've always had good
results (so far) on the effective assumption that only one such
query will run at a time. (That is probably helped by the fact that
we normally submit jobs which run such queries to a job queue
manager which runs them one at a time...)
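The sizing rule quoted above can be sketched as a small calculation. This is a hedged illustration, not a measurement: the shared_buffers and OS cache figures below are made-up example values; substitute numbers from your own system (e.g. SHOW shared_buffers in psql and the "cached" figure reported by the OS under normal load).

```shell
# Example estimate of effective_cache_size (values are hypothetical).
shared_buffers_mb=2048        # from SHOW shared_buffers
os_cache_mb=10240             # OS filesystem cache under normal load
effective_cache_size_mb=$((shared_buffers_mb + os_cache_mb))
echo "effective_cache_size = ${effective_cache_size_mb}MB"
```

With these example numbers that works out to 12288MB, i.e. roughly 75% of a 16GB machine, matching the rule of thumb above. Since the setting is only a planner hint, overestimating it slightly is generally harmless.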
This is getting back to that issue of using only enough processes at
one time to keep all the bottleneck resources fully utilized. Some
people tend to assume that if they throw a few more concurrent
processes into the mix, it'll all get done sooner. There are a
great many benchmarks which show otherwise.
-Kevin