From: Thomas Kellerer <spam_eater(at)gmx(dot)net>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: How does Postgres estimate the memory needed for sorting/aggregating
Date: 2017-01-25 22:00:11
Message-ID: o6b757$5q5$1@blaine.gmane.org
Lists: pgsql-general
Tomas Vondra wrote on 25.01.2017 at 22:46:
>> I guess this is based on the column statistics stored in pg_stats, but I
>> am not sure:
>>
>
> It is based on the average length of values in that column, yes.
Thanks for confirming that.
I assume this is taken from pg_stats.avg_width?
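As an illustrative sketch only (this is not PostgreSQL's actual planner code), a size estimate built from per-column average widths, such as those reported in pg_stats.avg_width, might look roughly like this; the function name and the per-entry overhead constant are assumptions for the example:

```python
# Simplified model of a planner-style size estimate for a hash aggregate,
# derived from per-column average widths (cf. pg_stats.avg_width).
# Illustrative only -- not PostgreSQL source code.

def estimate_hash_agg_bytes(avg_widths, n_groups, per_group_overhead=64):
    """Rough memory estimate: one hash entry per group, each holding
    the grouped columns plus a fixed per-entry overhead (hypothetical)."""
    row_width = sum(avg_widths)                  # bytes per grouped row
    return n_groups * (row_width + per_group_overhead)

# Example: two columns with avg_width 4 and 32 bytes, 10,000 distinct groups
est = estimate_hash_agg_bytes([4, 32], 10_000)   # 10,000 * (36 + 64) bytes
```

The key point is that the estimate depends on statistics (average widths and the estimated number of groups), which is exactly why it can be wrong at execution time.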
> I'm not sure what you mean by 'dynamically resize'. The above
> decision is pretty much how planner decides whether to use hash
> aggregate or group aggregate. If we estimate that the hash aggregate
> will fit into work_mem, the planner will consider both possibilities.
> If the estimate says hash aggregate would not fit into work_mem,
> we'll only consider group aggregate, because that can work with very
> little memory.
>
> At execution time we'll only use as much memory as actually needed.
> The trouble is that if we under-estimated the amount of memory,
> there's no way back.
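The decision described above can be paraphrased as a small sketch; the function and strategy names here are illustrative assumptions, not identifiers from the PostgreSQL source:

```python
# Paraphrase of the planner decision described in the quoted text:
# if the estimated hash table fits in work_mem, both strategies are
# costed; otherwise only the low-memory sort-based group aggregate
# is considered. Names are illustrative, not PostgreSQL code.

def candidate_aggregate_paths(estimated_hash_bytes, work_mem_bytes):
    """Return which aggregate strategies the planner would consider."""
    if estimated_hash_bytes <= work_mem_bytes:
        # Hash aggregate expected to fit: both strategies compete on cost.
        return ["hash_aggregate", "group_aggregate"]
    # Estimated not to fit: only the strategy that works in little memory.
    return ["group_aggregate"]

# With work_mem = 4 MB, a 1 MB estimate leaves both options open:
paths = candidate_aggregate_paths(1_000_000, 4 * 1024 * 1024)
```

Note that the choice is made once, from the estimate; the sketch has no branch that revisits the decision at execution time, which mirrors the "no way back" problem below.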
That under-estimation is what I meant by "dynamically resize":
what happens if the planner assumes 100 kB but in reality it needs 100 MB?
Thomas
Next Message: Tomas Vondra | 2017-01-25 22:06:44 | Re: How does Postgres estimate the memory needed for sorting/aggregating
Previous Message: John R Pierce | 2017-01-25 21:47:41 | Re: How does Postgres estimate the memory needed for sorting/aggregating