Re: Eager aggregation, take 3

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Richard Guo <guofenglinux(at)gmail(dot)com>, Tender Wang <tndrwang(at)gmail(dot)com>, Paul George <p(dot)a(dot)george19(at)gmail(dot)com>, Andy Fan <zhihuifan1213(at)163(dot)com>, pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: Eager aggregation, take 3
Date: 2025-01-20 18:39:15
Message-ID: CA+TgmoZqHFVaa7RXf_PFUGGbe8Yn=APc2cqV7pj3-essintNgQ@mail.gmail.com
Lists: pgsql-hackers

On Mon, Jan 20, 2025 at 12:57 PM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> However, a partial-aggregation path does not generate the same data
> as an unaggregated path, no matter how fuzzy you are willing to be
> about the concept. So I'm having a very hard time accepting that
> it ought to be part of the same RelOptInfo, and thus I don't really
> buy that annotating paths with a GroupPathInfo is the way forward.

Seems like a fair argument. I'm not sure it would be dispositive if
practical considerations merited the opposite treatment, but that
doesn't seem to be the case.

> What this line of analysis doesn't tell us though is whether paths
> that did their partial aggregations at different join levels can be
> considered as enough alike that they should compete on cost terms.
> If they are, we need to put them into the same RelOptInfo. So
> while I want to have separate RelOptInfos for partially aggregated
> paths, I'm unclear on how many of those we need or what their
> identifying property is.
>
> Also: we avoid generating parameterized partial paths, because
> combining those things would be too much of a mess. There's some
> handwaving in the comments for add_partial_path to the effect that
> it wouldn't be a win anyway, but I think the real reason is that
> it'd be far too complicated for the potential value. Can we make
> a similar argument for partial aggregation? I sure hope so.

I think your hopes will be dashed in this instance. Suppose we have:

SELECT m.mapped_value, SUM(g.summable_quantity)
FROM mapping_table m JOIN gigantic_dataset g
ON m.raw_value = g.raw_value GROUP BY 1;

If the mapping_table is small, we can do something like this:

FinalizeAggregate
  -> Gather
    -> PartialAggregate
      -> Hash Join

But if mapping_table is large and relatively few of its keys actually
appear as raw values in gigantic_dataset, being able to do the
PartialAggregate before the join would be rather nice, and you wouldn't
want to give up on parallel query in such a case.
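
To make that concrete, the rough plan shape I have in mind -- just a
sketch, with the partial aggregation pushed below the join on the
gigantic_dataset side and that scan still parallelized -- would be
something like:

FinalizeAggregate
  Group Key: m.mapped_value
  -> Hash Join
       Hash Cond: g.raw_value = m.raw_value
    -> Gather
      -> PartialAggregate
           Group Key: g.raw_value
        -> Parallel Seq Scan on gigantic_dataset g
    -> Hash
      -> Seq Scan on mapping_table m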

P.S. I'm not so sure you're right about the reason why this is not
supported. We can create a partial path for a joinrel by picking a
partial path on one side and a non-partial path on the other side, so
we can get NestLoops below Gather just fine using the parameterized
paths that we're generating anyway to support non-parallel cases. But
what would the plan look like if we were using a partial,
parameterized path? That path would have to be on the inner side of a
nested loop, and as far as I can see it would need to have a Gather
node on top of it and below the Nested Loop, so you're talking about
something like this:

Nested Loop
  -> Seq Scan on something
  -> Gather
    -> Nested Loop
      -> Index Scan on otherthing
           Index Cond: otherthing.x = something.x
      -> Whatever Scan on whatever

But putting Gather on the inner side of a nested loop like that would
mean repeatedly starting up workers and shutting them down again, which
seems like no fun at all. If there's some way of using a partial,
parameterized path that doesn't involve sticking a Gather on the inner
side of a Nested Loop, then the technique might have some promise and
the existing comment (which I probably wrote) is likely bunk.
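
For contrast, the shape we can already build today -- a partial path on
the outer side, an ordinary parameterized path on the inner side, and a
single Gather over the whole join -- looks roughly like this, reusing
the same made-up table names:

Gather
  -> Nested Loop
    -> Parallel Seq Scan on something
    -> Index Scan on otherthing
         Index Cond: otherthing.x = something.x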

> > I agree that creating an exponential number of RelOptInfos is not
> > going to work out well.
>
> FWIW, I'm way more concerned about the number of Paths considered
> than I am about the number of RelOptInfos. This relates to your
> question about whether we want to use some heuristics to limit
> the planner's search space.

I had that instinct, too, but I'm not 100% sure whether it was a
correct instinct. If we create too many Paths, it's possible that most
of them will be thrown away before we really do anything with them, in
which case they cost CPU cycles but there's no memory accumulation.
Mixing paths that perform the partial aggregation at different levels
into the same RelOptInfo also increases the chances that you're going
to throw away a lot of stuff early. On the other hand, if we create
too many RelOptInfos, that memory can't be freed until the end of the
planning cycle. If you don't mind waiting a long time for the planner
but you do mind running out of memory, the second one is worse. But of
course, the best option is to consider neither too many
Paths nor too many RelOptInfos.

I have heard rumors that in some other systems, they decide on the
best serial plan first and then insert parallel operators afterward.
Something like that could potentially be done here, too: only explore
eager aggregation for join orders that are sub-parts of the best
non-eagerly-aggregated join order. But I am sort of hesitant to
propose it as a development direction because we've never done
anything like that before and I don't think it would be at all easy to
implement. Nonetheless, I can't help feeling like we're kidding
ourselves a little bit, not just with this patch but in general. We
talk about "pushing down" aggregates or sorts or operations that can
be done on foreign nodes, but that implies that we start with them at
the top and then try to move them downward. In fact, we consider them
everywhere and expect the pushed-down versions to win out on cost.
While that feels sensible to some degree, it means every major new
query planning technique tends to multiply the amount of planner work
we're doing rather than adding to it. I'm fairly sure that the best
parallel plan need not be a parallelized version of the best
non-parallel plan but it often is, and the more things parallelism
supports, the more likely it is that it will be (I think). With eager
aggregation, it feels like we're multiplying the number of times that
we replan the same join tree by a number that is potentially MUCH
larger than 2, yet it seems to me that much of that extra work is
unlikely to find anything. Even if we find a way to make it work here
without too much pain, I wonder what happens when the next interesting
optimization comes along. Multiplication by a constant greater than or
equal to two isn't an operation one can do too many times, generally
speaking, without ending up with a big number.

--
Robert Haas
EDB: http://www.enterprisedb.com
