From: | Simon Riggs <simon(at)2ndquadrant(dot)com> |
---|---|
To: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
Cc: | Rod Taylor <pg(at)rbt(dot)ca>, PostgreSQL Development <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Large Scale Aggregation (HashAgg Enhancement) |
Date: | 2006-01-17 07:05:29 |
Message-ID: | 1137481529.3180.254.camel@localhost.localdomain |
Lists: | pgsql-hackers |
On Mon, 2006-01-16 at 20:02 -0500, Tom Lane wrote:
> Simon Riggs <simon(at)2ndquadrant(dot)com> writes:
> > Sure hash table is dynamic, but we read all inner rows to create the
> > hash table (nodeHash) before we get the outer rows (nodeHJ).
>
> But our idea of the number of batches needed can change during that
> process, resulting in some inner tuples being initially assigned to the
> wrong temp file. This would also be true for hashagg.
So we correct that before we start reading the outer table.
> > Why would we continue to dynamically build the hash table after the
> > start of the outer scan?
>
> The number of tuples written to a temp file might exceed what we want to
> hold in memory; we won't detect this until the batch is read back in,
> and in that case we have to split the batch at that time. For hashagg
> this point would apply to the aggregate states not the input tuples, but
> it's still a live problem (especially if the aggregate states aren't
> fixed-size values ... consider a "concat" aggregate for instance).
OK, I see what you mean. Sounds like we should have a new definition for
aggregates, "Sort Insensitive", that allows them to be used when the input
ordering does not affect the result, since that case can be optimised
much better when using HashAgg. Since we know that applies to the common
cases of SUM, AVG, etc., this will certainly help people.
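Just to make that distinction concrete (toy C, nothing to do with the
executor): a SUM-style transition function gives the same answer in any
input order, while a concat-style one does not:

#include <stdio.h>
#include <string.h>

/* order-insensitive transition: addition commutes */
static double sum_trans(double state, double x)
{
    return state + x;
}

/* order-sensitive transition: result depends on input order */
static char *concat_trans(char *state, const char *x)
{
    return strcat(state, x);
}

int main(void)
{
    char buf1[16] = "", buf2[16] = "";

    /* 1 then 2, vs 2 then 1: both give 3 */
    printf("%g == %g\n",
           sum_trans(sum_trans(0.0, 1.0), 2.0),
           sum_trans(sum_trans(0.0, 2.0), 1.0));

    /* "a" then "b", vs "b" then "a": "ab" != "ba" */
    concat_trans(concat_trans(buf1, "a"), "b");
    concat_trans(concat_trans(buf2, "b"), "a");
    printf("\"%s\" != \"%s\"\n", buf1, buf2);
    return 0;
}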
For sort-sensitive aggregates, it sounds like we either:
1. Write to a single file, while we remember the start offset of the
first row of each batch.
2. Write to multiple files, adding a globally incrementing sequence id.
Batches are then re-sorted on the sequence id before processing (a rough
sketch of this follows below).
3. We give up, delete the existing batches and restart the scan from the
beginning of the outer table.
Sounds like (1) is best, since the overflow just becomes a SortedAgg.
But all of them sound ugly.
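For what it's worth, here is a rough standalone sketch of option (2):
illustrative C only, not executor code, and the names (SpilledTuple,
spill_tuple, process_batch) are made up. Each spilled tuple carries a
globally incrementing sequence id, and a batch is re-sorted on that id
before its tuples are fed to the aggregate's transition function, so an
order-sensitive aggregate still sees them in original input order.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical on-disk format for a spilled tuple: key and value stand in
 * for the grouping key and the aggregate's input column. */
typedef struct SpilledTuple
{
    unsigned long seqid;    /* position in the original input stream */
    int           key;
    double        value;
} SpilledTuple;

static unsigned long next_seqid = 0;

/* Append a tuple to its batch file, tagged with the next sequence id. */
static void
spill_tuple(FILE *batchfile, int key, double value)
{
    SpilledTuple st;

    st.seqid = next_seqid++;
    st.key = key;
    st.value = value;
    fwrite(&st, sizeof(st), 1, batchfile);
}

/* Comparator for qsort: restore original input order within a batch. */
static int
cmp_seqid(const void *a, const void *b)
{
    unsigned long sa = ((const SpilledTuple *) a)->seqid;
    unsigned long sb = ((const SpilledTuple *) b)->seqid;

    return (sa > sb) - (sa < sb);
}

/* Read a whole batch back, re-sort it on seqid, then feed the tuples to
 * the (possibly order-sensitive) aggregate in original input order. */
static void
process_batch(FILE *batchfile, long ntuples)
{
    SpilledTuple *tuples = malloc(ntuples * sizeof(SpilledTuple));

    rewind(batchfile);
    fread(tuples, sizeof(SpilledTuple), ntuples, batchfile);
    qsort(tuples, ntuples, sizeof(SpilledTuple), cmp_seqid);

    for (long i = 0; i < ntuples; i++)
        printf("key=%d value=%g seq=%lu\n",   /* transition fn goes here */
               tuples[i].key, tuples[i].value, tuples[i].seqid);

    free(tuples);
}

int main(void)
{
    FILE *batch = tmpfile();    /* stands in for one batch's temp file */

    spill_tuple(batch, 1, 10.0);
    spill_tuple(batch, 1, 20.0);
    spill_tuple(batch, 1, 30.0);

    process_batch(batch, 3);
    fclose(batch);
    return 0;
}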
Best Regards, Simon Riggs