Re: Eager aggregation, take 3

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Richard Guo <guofenglinux(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Tender Wang <tndrwang(at)gmail(dot)com>, Paul George <p(dot)a(dot)george19(at)gmail(dot)com>, Andy Fan <zhihuifan1213(at)163(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>, pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: Eager aggregation, take 3
Date: 2025-01-24 20:53:42
Message-ID: CA+Tgmoa3+G_=8XuQWN+0ugv6r-WV6ruFESpOxpXAAKrne3oVDQ@mail.gmail.com
Lists: pgsql-hackers

On Wed, Jan 22, 2025 at 1:48 AM Richard Guo <guofenglinux(at)gmail(dot)com> wrote:
> This approach would require injecting multiple intermediate
> aggregation nodes into the path tree, for which we currently lack the
> necessary architecture. As a result, I didn't pursue this idea
> further. However, I'm really glad you mentioned this approach, though
> it's still unclear whether it's a feasible or reasonable idea.

I think the biggest question in my mind is really whether we can
accurately judge when such a strategy is likely to be a win. In this
instance it looks like we could have figured it out, but as we've
discussed, I fear a lot of estimates will be inaccurate. If we knew
they were going to be good, then I see no reason not to apply the
technique when it's sensible.

> I don't have much experience with end-user scenarios, so I'm not sure
> if it's common to have queries where the row count increases with more
> and more tables joined.

I don't think it's very common to see it increase as dramatically as
in your test case.

> > To be honest, I was quite surprised this was a percentage like 50% or
> > 80% and not a multiple like 2 or 5. And I had thought the multiplier
> > might even be larger, like 10 or more. The thing is, 50% means we only
> > have to form 2-item groups in order to justify aggregating twice.
> > Maybe SUM() is cheap enough to justify that treatment, but a more
> > expensive aggregate might not be, especially things like string_agg()
> > or array_agg() where aggregation creates bigger objects.
>
> Hmm, if I understand correctly, the "percentage" and the "multiple"
> work in the same way. Percentage 50% and multiple 2 both mean that
> the average group size is 2, and percentage 90% and multiple 10 both
> mean that the average group size is 10. In general, this relationship
> should hold: percentage = 1 - 1/multiple. However, I might not have
> grasped your point correctly.

Yes, they're equivalent. However, a percentage to me suggests that we
think that the meaningful values might be something like 20%, 50%,
80%; whereas with a multiplier someone might be more inclined to think
of values like 10, 100, 1000. You can definitely write those values as
90%, 99%, 99.9%; however, it seems less natural to me to express it
that way when we think the value will be quite close to 1. The fact
that you chose a percentage suggested to me that you were aiming for a
less-strict threshold than I had supposed we would want.
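(For reference, the equivalence can be sketched in a few lines of Python; the function names here are mine for illustration, not anything in the patch:)

```python
# Relationship between the two parameterizations of the group-size
# threshold discussed above: percentage = 1 - 1/multiple, and
# conversely multiple = 1 / (1 - percentage).

def percentage_from_multiple(multiple: float) -> float:
    """Fraction of rows removed by pre-aggregation, given an average group size."""
    return 1.0 - 1.0 / multiple

def multiple_from_percentage(percentage: float) -> float:
    """Average group size implied by a given row-reduction fraction."""
    return 1.0 / (1.0 - percentage)

for m in (2, 10, 100, 1000):
    print(f"group size {m:>4} -> removes {percentage_from_multiple(m):.1%} of rows")
# group size    2 -> removes 50.0% of rows
# group size   10 -> removes 90.0% of rows
# group size  100 -> removes 99.0% of rows
# group size 1000 -> removes 99.9% of rows
```

So multipliers of 10, 100, 1000 correspond to 90%, 99%, 99.9%, which is why values near 1 look unnatural written as percentages.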

> Yeah, as you summarized, this heuristic is primarily used to discard
> unpromising paths, ensuring they aren't considered further. For the
> paths that pass this heuristic, the cost model will then determine the
> appropriate aggregation and join methods. If we take this into
> consideration when applying the heuristic, it seems to me that we
> would essentially be duplicating the work that the cost model
> performs, which doesn't seem necessary.

Well, I think we do ideally want heuristics that can reject
unpromising paths earlier. The planning cost of this is really quite
high. But I'm not sure how far we can get with this particular
heuristic. True, we could raise it to a larger value, and that might
help to rule out unpromising paths earlier. But I fear you'll quickly
find examples where it also rules out promising paths early. A good
heuristic is easy to compute and highly accurate. This heuristic is
easy to compute, but the accuracy is questionable.

--
Robert Haas
EDB: http://www.enterprisedb.com
