Re: is there a way to firmly cap postgres worker memory consumption?

From: Tom Lane <tgl@sss.pgh.pa.us>
To: Steve Kehlet <steve.kehlet@gmail.com>
Cc: pgsql-general@postgresql.org
Subject: Re: is there a way to firmly cap postgres worker memory consumption?
Date: 2014-04-08 19:06:14
Message-ID: 23268.1396983974@sss.pgh.pa.us
Lists: pgsql-general

Steve Kehlet <steve.kehlet@gmail.com> writes:
> But for now, the devs are asking me for some way to put a cap on a postgres
> query's total memory consumption. I'm familiar with the available settings
> on the "Resource Consumption" docs (and you can see my settings in my gist
> above, including work_mem turned way down to 1MB for testing), but it seems
> like there are things like Materialize that remain uncappable, since
> they're not constrained by the shared_buffers and work_mem limits.

Materialize nodes should honor the work_mem limit. That's per node of
course, but you only have half a dozen of them in this query, so I doubt
that's where the problem is. You mentioned array_agg ... how are you
using that exactly, and how large a result are you expecting it to create?
If you had an array_agg being used in a HashAggregate plan node, I'd
figure for sure that was the source of your problem, but the explain
output shows it's a GroupAggregate so there should only be one array_agg
active at a time.
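
If you want to double-check which strategy the planner picks as you vary
work_mem, something along these lines would show it (the database, table,
and column names here are just placeholders):

    # Placeholder names; substitute your own database, table, and columns.
    psql -d yourdb -c "SET work_mem = '1MB';
                       EXPLAIN SELECT grp, array_agg(val)
                       FROM big_table GROUP BY grp;"
    # A GroupAggregate keeps one array_agg state alive at a time;
    # a HashAggregate keeps one per group, all in memory at once.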

Anyway, what I would try doing is starting the postmaster with a "ulimit
-d" value corresponding to the max per-process data segment size you want.
Something in the range of 50-100MB would probably be reasonable if your
queries aren't too complex.
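
For instance (sizes are illustrative; on bash, ulimit -d is measured in
1kB units):

    # Rough sketch: cap the per-process data segment before launching the
    # postmaster; backend processes inherit the limit from it.
    ulimit -d 102400                # ~100MB per process, in kB
    pg_ctl -D /path/to/data start   # /path/to/data is a placeholder

Once a backend hits that cap, a failed malloc should surface as an "out of
memory" error that aborts the query, rather than the whole process getting
killed.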

regards, tom lane
