From: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>
To: bricklen <bricklen(at)gmail(dot)com>
Cc: Jeison Bedoya <jeisonb(at)audifarma(dot)com(dot)co>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: to many locks held
Date: 2013-07-31 05:18:21
Message-ID: CAB7nPqQaZ9TaMn6cSMc9GwCWDNcBS=pv=6=r715+vN9YPNO+xw@mail.gmail.com
Lists: pgsql-performance
On Tue, Jul 30, 2013 at 11:48 PM, bricklen <bricklen(at)gmail(dot)com> wrote:
> On Tue, Jul 30, 2013 at 3:52 AM, Jeison Bedoya <jeisonb(at)audifarma(dot)com(dot)co> wrote:
>> memory ram: 128 GB
>> cores: 32
>>
>> max_connections: 900
>
> I would say you might be better off using a connection pooler if you need
> this many connections.
>

Yeah, that's a lot. pgbouncer might be a good option in your case; a sketch
follows below.
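
A minimal pgbouncer.ini sketch to show the idea (the database name, file
paths and pool sizes here are illustrative assumptions, not values from
your system):

    [databases]
    ; hypothetical database entry
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    ; transaction pooling holds a backend only for the
    ; duration of each transaction
    pool_mode = transaction
    ; accept up to 900 clients, but share far fewer backends
    max_client_conn = 900
    default_pool_size = 50

With transaction pooling, 900 application connections can share a few
dozen real backends, which is plenty for a 32-core box.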
>> work_mem = 1024MB
>
> work_mem is pretty high. It would make sense in a data warehouse-type
> environment, but with a max of 900 connections, that can get used up in a
> hurry. Do you find your queries regularly spilling sorts to disk (something
> like "External merge Disk" in your EXPLAIN ANALYZE plans)?
>
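For what it's worth, a sort that spills shows up in the plan roughly like
this (column name and numbers invented for illustration):

    Sort  (actual time=2153.331..2413.480 rows=1000000 loops=1)
      Sort Key: some_column
      Sort Method: external merge  Disk: 102400kB

whereas an in-memory sort reports "Sort Method: quicksort  Memory: ...kB".
Setting log_temp_files = 0 in postgresql.conf also logs every temporary
file created, which makes spills easy to spot over time.
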
work_mem is a per-operation setting for sort and hash operations, so a
single query can consume several multiples of it. In your case you could
end up with 900GB of memory allocated even if each of the 900 sessions
runs only one such operation in parallel, far beyond your 128GB of RAM.
Simply reduce work_mem to something your server can manage and you should
be able to solve your OOM problems; see the sketch below.
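
A minimal postgresql.conf sketch (the exact value is an illustrative
assumption, not a tuned recommendation):

    # hypothetical, more conservative value for a 128GB server
    # that allows up to 900 sessions
    work_mem = 16MB
    # worst case of one sort/hash per session:
    # 900 x 16MB ~= 14GB, leaving headroom for queries that
    # run several such operations at once

Individual sessions that really need big sorts can still raise it locally
with SET work_mem.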
--
Michael