From: Tony Dare <wadedare4703(at)comcast(dot)net>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Running out of memory at vacuum
Date: 2013-05-16 21:02:41
Message-ID: 51954971.6010904@comcast.net
Lists: pgsql-general
On 05/16/2013 07:13 AM, Ioana Danes wrote:
> Hi Jeff,
>
> Yes, stopping/starting the application server does close all the
> connections to the database.
> Lately I restarted postgres too every time that happened. It did
> happen in the past, sometime last year, when I tried just closing the
> app and it was not enough. I might be mixing up different scenarios,
> though, because I had another issue when the max connections were set
> to 1000 by mistake and we ran out of memory for good reason. So it
> might have happened in that case, not now.
>
> I will keep you updated.
>
> Thank you,
> Ioana
> ------------------------------------------------------------------------
> *From:* Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
> *To:* Ioana Danes <ioanasoftware(at)yahoo(dot)ca>
> *Cc:* PostgreSQL General <pgsql-general(at)postgresql(dot)org>
> *Sent:* Thursday, May 16, 2013 9:56:07 AM
> *Subject:* Re: [GENERAL] Running out of memory at vacuum
>
> On Thu, May 16, 2013 at 6:35 AM, Ioana Danes
> <ioanasoftware(at)yahoo(dot)ca> wrote:
>
> Hi Jeff,
>
> On Tuesday, May 14, 2013, Ioana Danes wrote:
>
>
> The fix is to restart postgres ... If I only close the
> connections the problem is still there, so I need to restart
> postgres.
>
>
> How are you closing the connections?
>
> I restart the application server. The problem is that max_idle
> connections was set to 1000 on the JDBC connection pool, so once the
> spike happened the app would run with 300 connections, 250 or so of
> them IDLE most of the time. I am fixing that.
>
>
>
> Hi Ioana, thanks for the responses. Does restarting the app server
> successfully cause all of those connections to terminate? If so, and
> yet you still have memory problems, then there is still the mystery of
> where the memory is going.
>
> Cheers,
>
> Jeff
>
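Regarding the idle connections discussed above: a quick way to see how
many backends are sitting idle is a query like this against
pg_stat_activity (the state column exists in 9.2 and later; on older
versions look at current_query instead):

    -- Count backends by state; 'idle' rows are open but inactive.
    SELECT state, count(*)
    FROM pg_stat_activity
    GROUP BY state
    ORDER BY count(*) DESC;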
From http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server

work_mem, maintenance_work_mem
<http://www.postgresql.org/docs/current/static/runtime-config-resource.html#GUC-WORK-MEM>
If you do a lot of complex sorts, and have a lot of memory, then
increasing the work_mem parameter allows PostgreSQL to do larger
in-memory sorts which, unsurprisingly, will be faster than disk-based
equivalents.
This size is applied to each and every sort done by each user, and
complex queries can use multiple working memory sort buffers. Set it to
50MB, and have 30 users submitting queries, and you are soon using 1.5GB
of real memory. Furthermore, if a query involves doing merge sorts of 8
tables, that requires 8 times work_mem. You need to consider what you
set max_connections to in order to size this parameter correctly. This
is a setting where data warehouse systems, where users are submitting
very large queries, can readily make use of many gigabytes of memory.
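To make the work_mem behaviour concrete (the table and column names
below are placeholders, not anything from this thread), you can watch a
sort stay in memory or spill to disk by toggling the setting per session
and checking the "Sort Method" line that EXPLAIN ANALYZE prints:

    SET work_mem = '1MB';      -- small value, likely to spill
    EXPLAIN ANALYZE SELECT * FROM big_table ORDER BY some_column;
    -- expect: Sort Method: external merge  Disk: ...kB
    SET work_mem = '256MB';    -- large enough for an in-memory sort
    EXPLAIN ANALYZE SELECT * FROM big_table ORDER BY some_column;
    -- expect: Sort Method: quicksort  Memory: ...kB
    RESET work_mem;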
maintenance_work_mem is used for operations like vacuum. Using extremely
large values here doesn't help very much, and *because you essentially
need to reserve that memory* for when vacuum kicks in, doing so takes it
away from more useful purposes. Something in the 256MB range has
anecdotally been a reasonably large setting here.
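A minimal sketch of that moderate approach (the 256MB figure is the
wiki's anecdotal value, not a recommendation for your box, and the table
name is a placeholder):

    -- In postgresql.conf, picked up on reload:
    --   maintenance_work_mem = 256MB
    -- Or per session, ahead of a manual vacuum:
    SET maintenance_work_mem = '256MB';
    VACUUM VERBOSE mytable;
    RESET maintenance_work_mem;

Bear in mind that each autovacuum worker can use up to
maintenance_work_mem at once, so budget for autovacuum_max_workers
times that value.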