From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Marinos J(dot) Yannikos" <mjy(at)geizhals(dot)at>
Cc: Jeff Trout <jeff(at)jefftrout(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: optimization ideas for frequent, large(ish) updates
Date: 2004-02-16 03:28:48
Message-ID: 17527.1076902128@sss.pgh.pa.us
Lists: pgsql-performance
"Marinos J. Yannikos" <mjy(at)geizhals(dot)at> writes:
> Jeff Trout wrote:
>> Remember that it is going to allocate 800MB per sort.
> I didn't know that it always allocates the full amount of memory
> specified in the configuration
It doesn't ... but it could use *up to* that much before starting to
spill to disk. If you are certain your sorts won't use that much,
then you could set the limit lower, hm?
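One way to act on this: since sort_mem can be changed per session, the server-wide default can stay modest while a session that is known to run a large sort raises its own limit. A minimal sketch, assuming a PostgreSQL release of this era where sort_mem is set in kilobytes (the specific values here are illustrative, not recommendations):

```sql
-- Keep a conservative default in postgresql.conf, e.g. sort_mem = 8192 (8 MB).
-- A session about to run a known large sort can raise its own limit:
SET sort_mem = 131072;   -- 128 MB, for this session only

-- ... run the big sorting query here ...

RESET sort_mem;          -- revert to the configured default
```

This keeps the worst case bounded for the many ordinary backends while still letting the occasional heavy query spill to disk later.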
Also keep in mind that sort_mem controls hash table size as well as sort
size. The hashtable code is not nearly as accurate as the sort code
about honoring the specified limit exactly. So you really oughta figure
that you could need some multiple of sort_mem per active backend.
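The sizing rule above can be made concrete with a rough worst-case budget. All numbers below are hypothetical, chosen only to illustrate the multiplication:

```sql
-- Worst-case memory budget: 25 active backends, sort_mem = 32 MB,
-- with a 2x allowance because hash tables may overshoot the limit.
SELECT 25 * 2 * 32 AS worst_case_mb;   -- 1600 MB
```

The point is that the configured sort_mem is a per-operation limit, not a server-wide one, so it must be multiplied out by concurrency (and padded for hash overshoot) before comparing it to physical RAM.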
regards, tom lane