From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: "G(dot)Bakalarski(at)icm(dot)edu(dot)pl" <G(dot)Bakalarski(at)icm(dot)edu(dot)pl>
Cc: "pgsql-bugs(at)postgresql(dot)org" <pgsql-bugs(at)postgresql(dot)org>
Subject: Re: BUG #8318: memory leak during CREATE TEMPORARY TABLE
Date: 2013-07-31 15:54:27
Message-ID: 1375286067.18051.YahooMailNeo@web162902.mail.bf1.yahoo.com
Lists: pgsql-bugs
"G(dot)Bakalarski(at)icm(dot)edu(dot)pl" <G(dot)Bakalarski(at)icm(dot)edu(dot)pl> wrote:
> Kevin Grittner <kgrittn(at)ymail(dot)com> wrote:
>> "g(dot)bakalarski(at)icm(dot)edu(dot)pl" <g(dot)bakalarski(at)icm(dot)edu(dot)pl> wrote:
>>
>>> When doing these kinds of statements
>>
>>> execute <unnamed>: CREATE TEMPORARY TABLE [...]
>>
>>> After a few days [...] my huge server (192GB of RAM) gets stuck
>>> because no memory is available (usually %commit is around 103-105%),
>>> the Linux OOM killer goes into action - it usually kills one of the
>>> postgres processes (the postmaster then restarts all databases) -
>>> and afterwards Linux again has some 100GB of uncommitted memory
>>
>> What do you have as settings for temp_buffers and max_connections?
>>
> temp_buffers = 16GB
> max_connections = 250
temp_buffers is per-connection, and any space allocated for this
purpose by a connection is not released until the connection is
closed. So you have configured your server to acquire and hold up
to 3.9 TB of RAM (16 GB x 250 connections) for this purpose.
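
As a rough upper-bound sketch (actual allocation only grows as each
backend touches temporary tables, but once grown it is held until the
connection closes), you can compute that worst case straight from
pg_settings, where temp_buffers is reported in 8kB blocks:

    SELECT pg_size_pretty(
             setting::bigint * 8192
             * current_setting('max_connections')::bigint
           ) AS worst_case_temp_buffers
      FROM pg_settings
     WHERE name = 'temp_buffers';
    -- 16GB of temp_buffers is 2097152 blocks of 8kB; times 250
    -- connections this reports 4000 GB, i.e. roughly 3.9 TB.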
> (with real number of connections about 120 and not going above)
OK, so you can only get to 1.9 TB of RAM used for temp_buffers with
120 connections, but that's still way more than the 192 GB you
actually have.
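
The same back-of-the-envelope check, plugging in the roughly 120
connections you actually see (again just a sketch of the worst case,
not a measurement of current usage):

    SELECT pg_size_pretty(16::bigint * 1024 * 1024 * 1024 * 120)
           AS worst_case_at_120;
    -- 16 GiB per connection times 120 connections = 1920 GB,
    -- about 1.9 TB, or ten times the 192 GB of physical RAM.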
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company