From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Jeff Davis <pgsql(at)j-davis(dot)com>
Cc: Tomas Vondra <tv(at)fuzzy(dot)cz>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: 9.5: Memory-bounded HashAgg
Date: 2014-08-14 16:12:44
Message-ID: 5306.1408032764@sss.pgh.pa.us
Lists: pgsql-hackers
Jeff Davis <pgsql(at)j-davis(dot)com> writes:
> HashJoin only deals with tuples. With HashAgg, you have to deal with a
> mix of tuples and partially-computed aggregate state values. Not
> impossible, but it is a little more awkward than HashJoin.
Not sure that I follow your point. You're going to have to deal with that
no matter what, no?
I guess in principle you could avoid the need to dump agg state to disk.
What you'd have to do is write out tuples to temp files even when you
think you've processed them entirely, so that if you later realize you
need to split the current batch, you can recompute the states of the
postponed aggregates from scratch (ie from the input tuples) when you get
around to processing the batch they got moved to. This would avoid
confronting the how-to-dump-agg-state problem, but it seems to have little
else to recommend it. Even if splitting a batch is a rare occurrence,
the killer objection here is that even a totally in-memory HashAgg would
have to write all its input to a temp file, on the small chance that it
would exceed work_mem and need to switch to batching.
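To make that alternative concrete, here is a deliberately simplified, hypothetical two-pass sketch in C (not PostgreSQL code, and not the batching scheme under discussion): every input tuple is written to a temp file whether or not it was absorbed in memory, and groups that did not fit are recomputed from scratch from that file on a second pass, so no aggregate state is ever serialized. The integer keys, the SUM aggregate, and the MAX_GROUPS cap standing in for work_mem are all invented for illustration; a real implementation would partition postponed tuples into batches by hash value instead of rescanning everything.

#include <stdio.h>

#define MAX_GROUPS 4                    /* crude stand-in for work_mem */

typedef struct { int key; long sum; } Group;

static Group groups[MAX_GROUPS];
static int   ngroups = 0;

/* Find or create the in-memory group for "key"; NULL means it doesn't fit. */
static Group *
lookup(int key)
{
    for (int i = 0; i < ngroups; i++)
        if (groups[i].key == key)
            return &groups[i];
    if (ngroups >= MAX_GROUPS)
        return NULL;
    groups[ngroups].key = key;
    groups[ngroups].sum = 0;
    return &groups[ngroups++];
}

static int
finished_in_pass1(int key, const int *done, int ndone)
{
    for (int i = 0; i < ndone; i++)
        if (done[i] == key)
            return 1;
    return 0;
}

int
main(void)
{
    int   keys[] = {1, 2, 3, 1, 4, 5, 2, 6, 1};
    long  vals[] = {10, 20, 30, 11, 40, 50, 21, 60, 12};
    int   ntuples = 9;
    FILE *spill = tmpfile();            /* holds ALL input tuples */
    int   done[MAX_GROUPS];
    int   ndone = 0;

    /* Pass 1: aggregate what fits, but spill EVERY tuple, used or not. */
    for (int i = 0; i < ntuples; i++)
    {
        Group *g;

        /* This write happens even if the hash table never overflows. */
        fwrite(&keys[i], sizeof(int), 1, spill);
        fwrite(&vals[i], sizeof(long), 1, spill);

        g = lookup(keys[i]);
        if (g)
            g->sum += vals[i];
        /* else: postponed; recomputed from the spill file in pass 2 */
    }
    for (int i = 0; i < ngroups; i++)
    {
        printf("group %d: sum %ld\n", groups[i].key, groups[i].sum);
        done[ndone++] = groups[i].key;
    }

    /*
     * Pass 2: recompute the postponed groups from the raw input tuples.
     * (Assumes the remainder now fits; otherwise more passes are needed.)
     */
    ngroups = 0;
    rewind(spill);
    int  key;
    long val;
    while (fread(&key, sizeof(int), 1, spill) == 1 &&
           fread(&val, sizeof(long), 1, spill) == 1)
    {
        if (finished_in_pass1(key, done, ndone))
            continue;
        Group *g = lookup(key);
        if (g)
            g->sum += val;
    }
    for (int i = 0; i < ngroups; i++)
        printf("group %d: sum %ld\n", groups[i].key, groups[i].sum);

    fclose(spill);
    return 0;
}

Even in this toy version the objection above is visible: the fwrite calls in pass 1 run for every input tuple, regardless of whether pass 2 is ever needed.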
regards, tom lane