>>> Farhan Husain <russoue(at)gmail(dot)com> wrote:
> Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov> wrote:
>> >>> Farhan Husain <russoue(at)gmail(dot)com> wrote:
>> > The machine postgres is running on has 4 GB of RAM.
>>
>> In addition to the other suggestions, you should be sure that
>> effective_cache_size is set to a reasonable value, which would
>> probably be somewhere in the neighborhood of '3GB'.
> The execution time has not improved. I am going to increase the
> shared_buffers now keeping the work_mem same.
Increasing shared_buffers is good, but as has already been mentioned, it
will not change the plan, which currently scans and sorts the whole
table for a1, so nobody will be surprised if you report minimal change,
if any. If you have not changed effective_cache_size (be sure not to
confuse it with any of the other configuration values), the planner will
assume you have only 128MB of cache, which is off by a factor of about
24 from reality.
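
Just as a sketch (the exact number depends on what else is running on
the box), you can check the live values from psql and then set it in
postgresql.conf; effective_cache_size is only a planner hint, so it
doesn't allocate anything:

  -- in psql: what the planner currently assumes
  SHOW effective_cache_size;
  SHOW shared_buffers;
  SHOW work_mem;

  # in postgresql.conf: roughly RAM minus what the OS and other
  # processes need -- something like 3GB on this 4GB machine
  effective_cache_size = 3GB

A plain reload (pg_ctl reload, or sending the postmaster a SIGHUP) is
enough to pick that up; unlike shared_buffers, it does not require a
restart.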
Also, I'm going to respectfully differ with some of the other posts on
the best setting for work_mem. Most benchmarks I've run, and most I can
remember seeing posted, found the best performance somewhere between
16MB and 32MB. You do have to be careful to keep it lower if you have a
large number of concurrent queries, since the limit applies per sort or
hash operation rather than per connection. In most such cases, though,
you're better off using a connection pool to limit the number of
concurrent queries instead.
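
For example (32MB here is just a starting point to test, not a
universal recommendation), either set it globally in postgresql.conf or
raise it only for the session running the problem query:

  # postgresql.conf -- per sort/hash operation, not per connection
  work_mem = 32MB

  -- or, from psql, just for the current session:
  SET work_mem = '32MB';
  -- run the query, then put it back:
  RESET work_mem;

With a pool holding the number of active backends to something sane, a
value in that range is usually safe.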
-Kevin