From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: Bill Moran <wmoran(at)potentialtech(dot)com>
Cc: Alan McKay <alan(dot)mckay(at)gmail(dot)com>, Postgres General <pgsql-general(at)postgresql(dot)org>
Subject: Re: limiting query time and/or RAM
Date: 2009-09-17 19:35:04
Message-ID: dcc563d10909171235r73213d19ia0acdfbed2f35fdf@mail.gmail.com
Lists: pgsql-general
On Thu, Sep 17, 2009 at 1:31 PM, Bill Moran <wmoran(at)potentialtech(dot)com> wrote:
> In response to Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>:
>
>> On Thu, Sep 17, 2009 at 12:56 PM, Alan McKay <alan(dot)mckay(at)gmail(dot)com> wrote:
>> > Is there any way to limit a query to a certain amount of RAM and / or
>> > certain runtime?
>> >
>> > i.e. automatically kill it if it exceeds either boundary?
>> >
>> > We've finally narrowed down our system crashes and have a smoking gun,
>> > but no way to fix it in the immediate term. This sort of limit would
>> > really help us.
>>
>> Generally speaking work_mem limits ram used. What are your
>> non-default postgresql.conf settings?
>
> work_mem limits memory usage _per_sort_.
>
> A big query can easily have many sorts. Each sort will be limited to
> work_mem memory usage, but the total could be much higher.
>
> The only way I can think is to set a per-process limit in the OS and allow
> the OS to kill a process when it gets out of hand. Not ideal, though.
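(For illustration only, the OS-level cap Bill describes might look roughly
like the fragment below on a Linux box. The numbers and the data directory
path are made up; ulimit -v caps each process's virtual address space in
kilobytes, the limit is inherited by the backends the postmaster forks, and
a backend that hits the cap should fail its allocation with an out-of-memory
error rather than drag the whole machine down.)

    # Hypothetical start-script fragment (Linux, bash); values are made up.
    # Every backend forked by the postmaster inherits this limit.
    ulimit -v 2097152                  # roughly 2GB of address space per process
    pg_ctl start -D /var/lib/pgsql/data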
True, but with work_mem set to 2MB I can't imagine having enough sorts
going on to need 4GB of RAM (2,000 simultaneous sorts? That's a lot). I'm
betting the OP was looking at top and misunderstanding what the numbers
mean, which is pretty common really: top counts shared memory against every
backend, so the per-process figures look much larger than the RAM actually
in use.
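For the runtime half of Alan's question, PostgreSQL does have a built-in
knob: statement_timeout cancels any statement that runs longer than the
given time. A minimal sketch with made-up values (set it per session as
below, per role with ALTER ROLE ... SET, or globally in postgresql.conf):

    -- Cancel anything that runs longer than five minutes in this session;
    -- 0 disables the timeout. Values here are illustrative only.
    SET statement_timeout = '5min';

    -- Keep per-sort/per-hash memory modest; a single complex query can use
    -- several multiples of this if its plan contains many sorts or hashes.
    SET work_mem = '2MB';

There's no hard per-query RAM cap inside PostgreSQL itself, which is why
the OS-level approach keeps coming up for the memory side.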