Re: is there a way to firmly cap postgres worker memory consumption?

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Steve Kehlet <steve(dot)kehlet(at)gmail(dot)com>
Cc: Amador Alvarez <apradopg(at)gmail(dot)com>, Forums postgresql <pgsql-general(at)postgresql(dot)org>
Subject: Re: is there a way to firmly cap postgres worker memory consumption?
Date: 2014-04-08 19:23:57
Message-ID: 23692.1396985037@sss.pgh.pa.us
Lists: pgsql-general

Steve Kehlet <steve(dot)kehlet(at)gmail(dot)com> writes:
>> Did you either profile or debug it to see what is going on?

> I would love to learn more about how to do this, to get to the bottom of
> the memory usage. I can google around, or can you suggest any reads?

Once you've got a ulimit in place so that malloc eventually fails with
ENOMEM, the backend process should print a memory context dump on stderr
when it hits that. Make sure your logging setup captures the process
stderr someplace (logging_collector works for this, syslog does not).
Post the dump here when you've got it.

regards, tom lane
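
For reference, a minimal sketch of the setup described above on a Linux box; the 4 GB figure, the data directory path, and the choice of ulimit -v (address-space limit) are illustrative assumptions, not something stated in this thread:

    # Illustrative only: cap each backend's virtual address space before
    # starting the postmaster (ulimit -v takes a value in kB; 4 GB here is
    # just an example), so a runaway allocation eventually gets ENOMEM
    # from malloc():
    ulimit -v 4194304
    pg_ctl -D /var/lib/pgsql/data start

    # Capture the backend's stderr so the memory context dump is not lost.
    # In postgresql.conf:
    #   logging_collector = on
    #   log_directory = 'pg_log'

    # After the failing query errors out with "out of memory", the dump
    # (it begins with TopMemoryContext and lists the child contexts)
    # should appear in the collector's log file, e.g.:
    grep -A 20 TopMemoryContext /var/lib/pgsql/data/pg_log/postgresql-*.log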
