From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: John R Pierce <pierce(at)hogranch(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Out of memory on SELECT in 8.3.5
Date: 2009-02-09 09:28:01
Message-ID: dcc563d10902090128u1909c686h989af13692df9e74@mail.gmail.com
Lists: pgsql-general
On Mon, Feb 9, 2009 at 2:17 AM, John R Pierce <pierce(at)hogranch(dot)com> wrote:
> Matt Magoffin wrote:
>>
>> We have 100+ postgres processes running, so for an individual process,
>> could the 1024 open-file limit be affecting this query? Or would I see
>> an explicit error message if that limit were hit?
>>
>>
>
> with 100 concurrent postgres connections, if they all did something
> requiring large amounts of work_mem, you could allocate 100 * 125MB (I
> believe that's what you said it was set to?) which is about 12GB :-O
>
> in fact a single query that's doing multiple sorts of large datasets for a
> messy join (or other similar activity) can involve several instances of
> work_mem. multiply that by 100 queries, and ouch.
>
> have you considered using a connection pool to reduce the postgres process
> count?
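
For rough numbers on that scenario (using the thread's own figures of 100
backends and 125MB of work_mem, plus an assumed handful of sort/hash areas
per query, not values read from any real server):

    -- Back-of-the-envelope worst case under the assumptions quoted above
    -- (illustrative numbers only, nothing taken from a live config):
    SELECT 100 * 125     AS one_work_mem_area_each_mb,    -- ~12.5 GB
           100 * 125 * 3 AS three_work_mem_areas_each_mb; -- ~37.5 GB, hence "ouch"
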
No matter what, I'm pretty conservative with work_mem for these
reasons. Plus, I tested most of our queries, and raising work_mem
above 16MB had no real positive effect on most of them. If I have a
single reporting query that can benefit from more than that, I set
work_mem for that session and run the query by itself (from things
like cron jobs) rather than leaving work_mem high server-wide. A high
work_mem is a bit of a foot-gun.
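
A minimal sketch of that per-report approach, assuming a made-up reporting
query and a hypothetical 256MB setting; SET LOCAL confines the larger
work_mem to a single transaction, so the server-wide value stays
conservative:

    -- Raise work_mem for this one report only; the table, columns and the
    -- 256MB figure are illustrative, not taken from the thread.
    BEGIN;
    SET LOCAL work_mem = '256MB';
    SELECT region, sum(amount)
    FROM   sales
    GROUP  BY region;
    COMMIT;

Run from cron, the same statements can simply be fed to psql -f, so the
global work_mem setting never changes.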