From: Jeff Trout <jeff(at)jefftrout(dot)com>
To: "Marinos J(dot) Yannikos" <mjy(at)geizhals(dot)at>
Cc: pgsql-performance(at)postgresql(dot)org, josh(at)agliodbs(dot)com
Subject: Re: optimization ideas for frequent, large(ish) updates
Date: 2004-02-15 17:20:38
Message-ID: 460CEC02-5FDB-11D8-8142-000D9366F0C4@jefftrout.com
Lists: pgsql-performance

On Feb 14, 2004, at 9:02 PM, Marinos J. Yannikos wrote:
> Josh Berkus wrote:
>
>> 800MB for sort mem? Are you sure you typed that correctly? You
>> must be counting on not having a lot of concurrent queries. It sure
>> will speed up index updating, though!
>
> 800MB is correct, yes... There are usually only 10-30 postgres
> processes active (imagine 5-10 people working on the web front-end
> while cron jobs access the db occasionally). Very few queries can use
> such large amounts of memory for sorting, but they do exist.
>
Remember that it is going to allocate 800MB per sort; it is not a case
of "you can allocate up to 800MB, so if you only need 1 meg, use one
meg". Some queries may end up involving several sort steps.
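To illustrate (a made-up example, not from this thread; the table and
column names are invented): a merge join between two unindexed tables,
with an ORDER BY on top, produces a plan with three Sort nodes, each of
which can draw on sort_mem:

    -- hypothetical query, just to show multiple sorts in one plan
    EXPLAIN SELECT a.*, b.val
    FROM a JOIN b ON a.id = b.id
    ORDER BY a.created_at;
    --  Sort (Sort Key: a.created_at)
    --    -> Merge Join (Merge Cond: a.id = b.id)
    --         -> Sort (Sort Key: a.id) -> Seq Scan on a
    --         -> Sort (Sort Key: b.id) -> Seq Scan on b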
In terms of sort_mem, it is best to set the system-wide default to a
good value for most queries, and then in your reporting queries (or
other memory-hungry ones) set sort_mem for that session only
(SET sort_mem = 800000); then only that session will use the loony
sort_mem.
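For example (a sketch; sort_mem is in kilobytes, so 800000 is roughly
800MB, matching your current setting):

    SET sort_mem = 800000;   -- raise the limit for this session only
    -- ... run the big reporting query here ...
    RESET sort_mem;          -- fall back to the postgresql.conf default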
It would be interesting to know if your machine is swapping.
--
Jeff Trout <jeff(at)jefftrout(dot)com>
http://www.jefftrout.com/
http://www.stuarthamm.net/