From: Greg Stark <stark(at)mit(dot)edu>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Odd out of memory problem.
Date: 2012-03-26 16:20:51
Message-ID: CAM-w4HOzEPH+i28=2PsZeLne4B-RPA-c0t8A4VYwkdTNTubwWw@mail.gmail.com
Lists: pgsql-hackers
On Mon, Mar 26, 2012 at 4:03 PM, Andrew Dunstan <andrew(at)dunslane(dot)net> wrote:
> TupleHashTable: 619175960 total in 95 blocks; 821528 free
> (331 chunks); 618354432 used
I think the plan you showed isn't the plan that's running out of
memory. I think it's running out of memory because it's using a
HashAggregate and underestimating the number of distinct groups. I
would set enable_hashagg=off for this query (not for the whole
server; hash aggregates are quite useful in general).
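Something like this keeps the change scoped to one transaction
(untested sketch; the table and column names are just stand-ins for
your actual query):

    BEGIN;
    -- Disable hash aggregation for this transaction only;
    -- SET LOCAL reverts automatically at COMMIT or ROLLBACK.
    SET LOCAL enable_hashagg = off;

    SELECT some_col, count(*)
    FROM some_table
    GROUP BY some_col;

    COMMIT;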
This is a long-standing problem with hash aggregates. The problem is
that we don't have a clever way to fall back to a non-hash aggregate
if the original memory estimate turns out to be way off. Just
spilling to disk the way hash joins do would, we think, be
horrendously expensive. If we had a workable approach it would be
very useful, because simply running out of memory is kind of rude.
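You can usually confirm the misestimate by comparing the planner's
estimated row count on the aggregate node with the actual one (again
just a sketch with made-up names; run it with enable_hashagg off so
the query can actually finish):

    SET enable_hashagg = off;

    -- "rows=" on the GroupAggregate node is the planner's guess at
    -- the number of distinct groups; "actual ... rows=" is the real
    -- count. A large gap is what leads the planner to pick a hash
    -- aggregate that doesn't fit in memory.
    EXPLAIN ANALYZE
    SELECT some_col, count(*)
    FROM some_table
    GROUP BY some_col;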
--
greg