From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Radoslav Nedyalkov <rnedyalkov(at)gmail(dot)com>
Cc: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: how to limit statement memory allocation
Date: 2021-03-09 18:13:49
Message-ID: 2036985.1615313629@sss.pgh.pa.us
Lists: pgsql-general

Radoslav Nedyalkov <rnedyalkov(at)gmail(dot)com> writes:
> On Tue, Mar 9, 2021 at 6:53 PM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> The only thing I'm aware of that could consume unbounded memory
>> on the server side is hash aggregation. (v13 has improved that
>> situation, but I'm guessing you are running some older version.)
>> The planner won't choose hash aggregation if it estimates that
>> the hash table would need to exceed work_mem, but its estimate
>> could be wrong.
> Query and plan attached.

Yeah, so, this looks suspicious:

   ->  HashAggregate  (cost=181320662.52..181323190.52 rows=252800 width=16)
         Group Key: gsm.merchant_id, cer.date
         ->  Merge Left Join  (cost=161250580.17..170174828.82 rows=891666696 width=71)
How accurate is that estimate for the number of merchant_id/date
groups? Is the estimate for the size of the input join reasonable?
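One way to check is to compare the planner's estimates against actual row counts. A minimal sketch (the query itself is in an attachment and not shown here, so the `...` placeholders stand in for it):

```sql
-- EXPLAIN ANALYZE runs the query and reports "rows=" (estimate) next to
-- "actual ... rows=" for each node; a large mismatch on the HashAggregate
-- or the Merge Left Join points at the estimation problem.
-- Caution: ANALYZE executes the query, so it can hit the same memory blow-up;
-- plain EXPLAIN shows estimates only, without executing.
EXPLAIN ANALYZE
SELECT ...            -- the problem query (not shown in this thread)
GROUP BY merchant_id, date;
```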
Assuming this is the problem, a brute-force fix could be to turn off
enable_hashagg. But it'd be better to try to get the planner's
estimates more in line with reality.
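As a sketch of the brute-force route (using SET LOCAL, which is one way to scope the change to a single transaction rather than the whole session):

```sql
BEGIN;
-- Disable hash aggregation for this transaction only; the planner will
-- fall back to a sort-based GroupAggregate, whose memory use is bounded.
SET LOCAL enable_hashagg = off;
-- run (or EXPLAIN) the problem query here, then:
COMMIT;   -- SET LOCAL reverts automatically at transaction end
```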
			regards, tom lane