From: David Rowley <david(dot)rowley(at)2ndquadrant(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Heikki Linnakangas <hlinnaka(at)iki(dot)fi>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>, Jeff Davis <pgsql(at)j-davis(dot)com>
Subject: Re: Spilling hashed SetOps and aggregates to disk
Date: 2018-06-05 02:56:23
Message-ID: CAKJS1f-bnfCjMewwGf4nu1wAfFPv4bSch0qk7XfHDjFcbvmDLQ@mail.gmail.com
Lists: pgsql-hackers

On 5 June 2018 at 06:52, Andres Freund <andres(at)anarazel(dot)de> wrote:
> That part has gotten a bit easier since, because we have serialize /
> deserialize operations for aggregates these days.
True, although not all built-in aggregates have them defined.
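For what it's worth, a rough catalog query can show which ones (a
sketch, relying on the aggserialfn/aggdeserialfn columns that
pg_aggregate has carried since 9.6):

    -- Aggregates with an opaque "internal" transition state (which
    -- cannot be serialized generically) that declare no serialization
    -- function; aggserialfn is 0/InvalidOid when none was defined.
    SELECT aggfnoid::oid::regprocedure
    FROM pg_aggregate
    WHERE aggtranstype = 'internal'::regtype
      AND aggserialfn = 0;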
> I wonder whether, at least for aggregates, the better fix wouldn't be to
> switch to feeding the tuples into tuplesort upon memory exhaustion and
> doing a sort based aggregate. We have most of the infrastructure to do
> that due to grouping sets. It's just the pre-existing in-memory tuples
> that'd be problematic, in that the current transition values would need
> to be serialized as well. But with a stable sort that'd not be
> particularly problematic, and that could easily be achieved.
Isn't there still a problem with determining when memory exhaustion
actually happens, though? As far as I know, we still have little
knowledge of how much memory each aggregate state occupies.
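To make that concrete, a hypothetical illustration: with something
like array_agg, each group's transition state grows with that group's
input, so the state's eventual size can't be known up front:

    -- Each group's array_agg transition state grows with the number
    -- of rows in that group, so the memory a hashed aggregate ends up
    -- using depends on the data itself, not just the group count.
    SELECT i % 1000 AS g, array_agg(i)
    FROM generate_series(1, 1000000) AS s(i)
    GROUP BY g;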
Jeff tried to solve this in [1], but from what I remember, there was
too much concern about the overhead of the additional accounting code.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services