Re: Eager aggregation, take 3

From: Richard Guo <guofenglinux(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Paul George <p(dot)a(dot)george19(at)gmail(dot)com>, Andy Fan <zhihuifan1213(at)163(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>, pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: Eager aggregation, take 3
Date: 2024-08-29 02:26:01
Message-ID: CAMbWs4-ov_aOhXTnVCH6SCW5KGpMbxUZtGKj=tqrqYyp8LXGuA@mail.gmail.com
Lists: pgsql-hackers

On Fri, Aug 23, 2024 at 11:59 PM Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> Here are some initial, high-level thoughts about this patch set.

Thank you for your review and feedback! It helps a lot in moving this
work forward.

> 1. As far as I can see, there's no real performance testing on this
> thread. I expect that it's possible to show an arbitrarily large gain
> for the patch by finding a case where partial aggregation is way
> better than anything we currently know, but that's not very
> interesting. What I think would be useful to do is find a corpus of
> existing queries on an existing data set and try them with and without
> the patch and see which query plans change and whether they're
> actually better. For example, maybe TPC-H or the subset of TPC-DS that
> we can actually run would be a useful starting point. One could then
> also measure how much the planning time increases with the patch to
> get a sense of what the overhead of enabling this feature would be.
> Even if it's disabled by default, people aren't going to want to
> enable it if it causes planning times to become much longer on many
> queries for which there is no benefit.

Right. I haven’t had time to run any benchmarks yet, but that is
something I need to do.
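
As a first step, I'm thinking of a small driver along these lines to
diff plans and planning times over the TPC-H queries (a rough sketch
in Python using psycopg2; it assumes the patch's GUC is named
enable_eager_aggregate, which may not match what the patches actually
expose, and the query file names are placeholders):

    # bench_eager_agg.py -- compare plans and planning time with the
    # patch's GUC off and on.
    import json
    import psycopg2

    QUERIES = ["q01.sql", "q03.sql"]  # placeholder TPC-H query files

    def explain(cur, sql):
        # SUMMARY ON makes plain EXPLAIN report "Planning Time" too.
        cur.execute("EXPLAIN (SUMMARY ON, FORMAT JSON) " + sql)
        row = cur.fetchone()[0]
        plan = row if isinstance(row, list) else json.loads(row)
        return plan[0]

    conn = psycopg2.connect("dbname=tpch")
    cur = conn.cursor()
    for qfile in QUERIES:
        sql = open(qfile).read()
        results = {}
        for setting in ("off", "on"):
            cur.execute("SET enable_eager_aggregate = " + setting)
            results[setting] = explain(cur, sql)
        changed = results["off"]["Plan"] != results["on"]["Plan"]
        print("%s: plan changed: %s, planning %.3f ms -> %.3f ms"
              % (qfile, changed,
                 results["off"]["Planning Time"],
                 results["on"]["Planning Time"]))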

> 2. I think there might be techniques we could use to limit planning
> effort at an earlier stage when the approach doesn't appear promising.
> For example, if the proposed grouping column is already unique, the
> exercise is pointless (I think). Ideally we'd like to detect that
> without even creating the grouped_rel. But the proposed grouping
> column might also be *mostly* unique. For example, consider a table
> with a million rows and a column with 500,000 distinct values. I suspect it
> will be difficult for partial aggregation to work out to a win in a
> case like this, because I think that the cost of performing the
> partial aggregation will not reduce the cost either of the final
> aggregation or of the intervening join steps by enough to compensate.
> It would be best to find a way to avoid generating a lot of rels and
> paths in cases where there's really not much hope of a win.
>
> One could, perhaps, imagine going further with this by postponing
> eager aggregation planning until after regular paths have been built,
> so that we have good cardinality estimates. Suppose the query joins a
> single fact table to a series of dimension tables. The final plan thus
> uses the fact table as the driving table and joins to the dimension
> tables one by one. Do we really need to consider partial aggregation
> at every level? Perhaps just where there's been a significant row
> count reduction since the last time we tried it, but at the next level
> the row count will increase again?
>
> Maybe there are other heuristics we could use in addition or instead.

Yeah, one of my concerns with this work is that it can use
significantly more CPU time and memory during planning once enabled.
It would be great if we had some efficient heuristics to limit that
effort. I'll work on that next and see what happens.
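
As a starting point, the pre-filter could be as simple as comparing
the estimated number of groups to the input row count before we build
any grouped rel (a toy sketch in Python rather than the real planner
C code; the 0.5 cutoff is a number I made up and would need tuning,
or perhaps a GUC):

    # Toy pre-filter for eager aggregation, not the planner's actual
    # code: skip building a grouped rel when partial aggregation is
    # unlikely to shrink its input much.
    def worth_partial_agg(input_rows, est_num_groups,
                          min_reduction=0.5):
        if est_num_groups >= input_rows:
            # Grouping column is (estimated to be) unique: partial
            # aggregation cannot reduce the row count at all.
            return False
        # Require at least a 2x row-count reduction before we spend
        # effort generating grouped rels and paths.
        return est_num_groups / input_rows <= min_reduction

    # Robert's example: a million rows, 500,000 distinct values --
    # only a 2x reduction, right at the edge of being worthwhile.
    print(worth_partial_agg(1_000_000, 500_000))  # True, marginal
    print(worth_partial_agg(1_000_000, 999_000))  # False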

> 3. In general, we are quite bad at estimating what will happen to the
> row count after an aggregation, and we have no real idea what the
> distribution of values will be. That might be a problem for this
> patch, because it seems like the decisions we will make about where to
> perform the partial aggregation might end up being quite random. At
> the top of the join tree, I'll need to compare directly aggregating
> the best join path with various paths that involve a finalize
> aggregation step at the top and a partial aggregation step further
> down. But my cost estimates and row counts for the partial aggregate
> steps seem like they will often be quite poor, which means that the
> plans that use those partial aggregate steps might also be quite poor.
> Even if they're not, I fear that comparing the cost of those
> PartialAggregate-Join(s)-FinalizeAggregate paths to the direct
> Aggregate path will look too much like comparing random numbers. We
> need to know whether the combination of the FinalizeAggregate step and
> the PartialAggregate step will be more or less expensive than a plain
> old Aggregate, but how can we tell that if we don't have accurate
> cardinality estimates?

Yeah, I'm concerned about this too. In addition to the inaccuracies
in aggregation estimates, our estimates for joins are sometimes not
very accurate either. All of this is likely to result in regressions
with eager aggregation in some cases. Currently I don't have a good
answer to this problem. Maybe we can run some benchmarks first and
investigate any regressions we find on a case-by-case basis to better
understand the specific issues.
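
To make the risk concrete, here is a toy cost model (invented per-row
costs, nothing like our actual costing) showing how the comparison
between a plain Aggregate and a
PartialAggregate-Join(s)-FinalizeAggregate path flips with the
estimated number of groups:

    # Toy illustration (invented unit costs, not PostgreSQL's cost
    # model) of how the plan choice flips with the group estimate.
    AGG_COST_PER_ROW = 1.0   # invented cost to aggregate one row
    JOIN_COST_PER_ROW = 2.0  # invented cost to join one row

    def plain_agg_cost(fact_rows):
        # Join everything first, then aggregate once at the top.
        return fact_rows * (JOIN_COST_PER_ROW + AGG_COST_PER_ROW)

    def eager_agg_cost(fact_rows, est_groups):
        # Partially aggregate below the join, join the groups, then
        # finalize; assumes the join neither adds nor removes rows.
        partial = fact_rows * AGG_COST_PER_ROW
        rest = est_groups * (JOIN_COST_PER_ROW + AGG_COST_PER_ROW)
        return partial + rest

    for est_groups in (1_000, 100_000, 900_000):
        plain = plain_agg_cost(1_000_000)
        eager = eager_agg_cost(1_000_000, est_groups)
        print("est_groups=%7d plain=%.0f eager=%.0f eager wins: %s"
              % (est_groups, plain, eager, eager < plain))

If the planner estimates 1,000 groups but the true number is closer
to 900,000, we would pick the eager plan and regress; that is exactly
the "comparing random numbers" problem you describe.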

Thanks
Richard
