From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Greg Stark <gsstark(at)mit(dot)edu>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, Rod Taylor <pg(at)rbt(dot)ca>, PostgreSQL Development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Large Scale Aggregation (HashAgg Enhancement)
Date: 2006-01-17 05:29:18
Message-ID: 24575.1137475758@sss.pgh.pa.us
Lists: pgsql-hackers
Greg Stark <gsstark(at)mit(dot)edu> writes:
> For a hash aggregate would it be possible to rescan the original table
> instead of spilling to temporary files?
Sure, but the possible performance gain is finite and the possible
performance loss is not. The "original table" could be an extremely
expensive join. We'd like to think that the planner gets the input size
estimate approximately right and so the amount of extra I/O caused by
hash table resizing should normally be minimal. The cases where it is
not right are *especially* not likely to be a trivial table scan as you
are supposing.
regards, tom lane
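[Tom's asymmetry argument can be sketched with a toy cost model. The numbers and function names below are hypothetical illustrations, not PostgreSQL's actual planner arithmetic: spilling pays a bounded extra cost proportional to the overflowing tuples, while rescanning re-pays the full input cost, which may be an arbitrarily expensive join.]

```python
# Toy cost model (hypothetical units) contrasting the two strategies for a
# hash aggregate whose hash table overflows available memory.

def spill_cost(input_cost, overflow_tuples, io_cost_per_tuple=1.0):
    """Spill overflowing groups to temp files: the extra I/O is bounded,
    roughly one write plus one read per spilled tuple."""
    return input_cost + 2 * overflow_tuples * io_cost_per_tuple

def rescan_cost(input_cost, passes):
    """Rescan the original input once per pass: the whole input cost,
    which may be an expensive join, is paid again each time."""
    return passes * input_cost

# Cheap input (a plain sequential scan): rescanning looks tolerable.
cheap = 1_000
print(spill_cost(cheap, overflow_tuples=200))   # 1400.0
print(rescan_cost(cheap, passes=2))             # 2000

# Expensive input (a costly join): rescanning doubles a huge cost,
# while the spill overhead stays the same small constant.
pricey = 1_000_000
print(spill_cost(pricey, overflow_tuples=200))  # 1000400.0
print(rescan_cost(pricey, passes=2))            # 2000000
```

The gain from rescanning is at best the avoided temp-file I/O (finite), while the loss grows without bound as the input's cost grows, which is the point of the reply above.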