| From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
|---|---|
| To: | Steve Kehlet <steve(dot)kehlet(at)gmail(dot)com> |
| Cc: | Forums postgresql <pgsql-general(at)postgresql(dot)org> |
| Subject: | Re: is there a way to firmly cap postgres worker memory consumption? |
| Date: | 2014-04-09 01:32:16 |
| Message-ID: | 1161.1397007136@sss.pgh.pa.us |
| Lists: | pgsql-general |
Steve Kehlet <steve(dot)kehlet(at)gmail(dot)com> writes:
> Thank you. For some reason I couldn't get it to trip with "ulimit -d
> 51200", but "ulimit -v 1572864" (1.5GiB) got me this in serverlog. I hope
> this is readable, if not it's also here:
Well, here's the problem:
> ExprContext: 812638208 total in 108 blocks; 183520 free (171
> chunks); 812454688 used
So something involved in expression evaluation is eating memory.
Looking at the query itself, I'd have to bet on this:
> ARRAY_TO_STRING(ARRAY_AGG(MM.ID::CHARACTER VARYING), ',')
My guess is that this aggregation is being done across a lot more rows
than you were expecting, and the resultant array/string therefore eats
lots of memory. You might try replacing that with COUNT(*), or even
better SUM(LENGTH(MM.ID::CHARACTER VARYING)), just to get some definitive
evidence about what the query is asking to compute.
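Concretely, the diagnostic substitutions Tom suggests might look like the following. This is a hedged sketch: the `FROM` clause and the `messages MM` table name are placeholders, since the original query's full text isn't shown; only the aggregate expression and the `MM.ID` column come from the post.

```sql
-- Placeholder FROM clause: substitute the real tables and joins
-- from the original query.

-- How many rows actually feed the aggregate?
SELECT COUNT(*)
FROM messages MM;

-- How big would the concatenated string be, without materializing
-- the array or the string itself?
SELECT SUM(LENGTH(MM.ID::CHARACTER VARYING))
FROM messages MM;
```

If the row count or the summed length is far larger than expected, the join or filter conditions are the place to look, not the memory settings.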
Meanwhile, it seems like ulimit -v would provide the safety valve
you asked for originally. I too am confused about why -d didn't
do it, but as long as you found a variant that works ...
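For reference, the safety valve would be set in whatever shell starts the postmaster, so every backend inherits the limit. A minimal sketch, assuming a 1.5 GiB cap and an illustrative data directory path (not from the original post):

```shell
# Cap virtual memory per process at 1.5 GiB (ulimit -v takes KiB).
# Backends that exceed it will fail allocations with an ERROR
# rather than growing without bound.
ulimit -v 1572864

# Hypothetical data directory; substitute your own.
pg_ctl start -D /var/lib/pgsql/data
```

Note the limit applies per process, not to the cluster as a whole, and a backend hitting it aborts the offending query rather than being killed outright.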
regards, tom lane