From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Feng Tian <ftian(at)vitessedata(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pretty bad n_distinct estimate, causing HashAgg OOM on TPC-H
Date: 2015-06-20 15:49:31
Message-ID: 55858B8B.3080309@2ndquadrant.com
Lists: pgsql-hackers
Hi,
On 06/20/2015 05:29 PM, Feng Tian wrote:
>
> I have not read Jeff's patch, but here is how I think hash agg should work:
>
> Hash agg scans the lineitem table, performing the aggregation in memory.
> Once work_mem is exhausted, it writes the intermediate states to disk,
> bucket by bucket. When the lineitem scan is finished, it reads all tuples
> from one bucket back, combines the intermediate states and finalizes the
> aggregation. I saw a quite extensive discussion on combining aggregates
> on the dev list, so I assume it will be added.
That's not really how the proposed patch works, mostly because we don't have
a good way to serialize/deserialize the aggregate state etc. There are also
various corner cases where you can end up writing much more data than you
assumed, but let's discuss that in the thread about the patch, not here.
regards
--
Tomas Vondra http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services