From: Peter Geoghegan <pg(at)bowt(dot)ie>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Jeff Davis <pgsql(at)j-davis(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, David Rowley <dgrowleyml(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, Andres Freund <andres(at)anarazel(dot)de>, Bruce Momjian <bruce(at)momjian(dot)us>, Justin Pryzby <pryzby(at)telsasoft(dot)com>, Melanie Plageman <melanieplageman(at)gmail(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Default setting for enable_hashagg_disk
Date: 2020-07-25 16:39:50
Message-ID: CAH2-Wz=ur7MQKpaUZJP=Adtg0TPMx5M_WoNE=ke2vUU=amdjPQ@mail.gmail.com
Lists: pgsql-docs pgsql-hackers
On Fri, Jul 24, 2020 at 12:55 PM Peter Geoghegan <pg(at)bowt(dot)ie> wrote:
> Could that be caused by clustering in the data?
>
> If the input data is in totally random order then we have a good
> chance of never having to spill skewed "common" values. That is, we're
> bound to encounter common values before entering spill mode, and so
> those common values will continue to be usefully aggregated until
> we're done with the initial groups (i.e. until the in-memory hash
> table is cleared in order to process spilled input tuples). This is
> great because the common values get aggregated without ever spilling,
> and most of the work is done before we even begin with spilled tuples.
>
> If, on the other hand, the common values are concentrated together in
> the input...
I still don't know if that was a factor in your example, but I can
clearly demonstrate that the clustering of data can matter a lot to
hash aggs in Postgres 13. I attach a contrived example where it makes
a *huge* difference.
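The attached script isn't reproduced inline here, but the general shape of the test is roughly as follows. This is only a sketch of the idea -- the table names, key distribution, and row counts are my illustration, not taken from the attachment:

-- Group keys are a mix of a few heavily repeated "common" values
-- and many near-unique "rare" values:
CREATE TABLE agg_test AS
SELECT CASE WHEN g % 2 = 0 THEN g % 100  -- common, highly skewed keys
            ELSE g                       -- rare, mostly unique keys
       END AS k
FROM generate_series(1, 10000000) g;

-- Clustered variant: common values are concentrated together.
CREATE TABLE agg_test_sorted AS SELECT k FROM agg_test ORDER BY k;

-- Randomized variant: no clustering whatsoever.
CREATE TABLE agg_test_random AS SELECT k FROM agg_test ORDER BY random();

ANALYZE agg_test_sorted;
ANALYZE agg_test_random;

SET work_mem = '200MB';
SET enable_sort = off;  -- may be needed to get a hash aggregate plan

-- Compare "Peak Memory Usage", "Disk Usage", and "HashAgg Batches"
-- in the two EXPLAIN ANALYZE outputs:
EXPLAIN (ANALYZE, COSTS OFF)
SELECT k, count(*) FROM agg_test_sorted GROUP BY k;

EXPLAIN (ANALYZE, COSTS OFF)
SELECT k, count(*) FROM agg_test_random GROUP BY k;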
I find that the sorted version of the aggregate query takes
significantly longer to finish, and has the following spill
characteristics:
"Peak Memory Usage: 205086kB Disk Usage: 2353920kB HashAgg Batches: 2424"
Note that the planner doesn't expect any partitions here, but we still
get 2424 batches -- so the planner seems to get it totally wrong.
OTOH, the same query against a randomized version of the same data (no
longer in sorted order, no clustering) works perfectly with the same
work_mem (200MB):
"Peak Memory Usage: 1605334kB"
Hash agg avoids spilling entirely (so the planner gets it right this
time around). Peak memory is higher, but the total footprint is
notably smaller: ~1.6GB of memory and no disk at all, versus ~200MB
of memory plus ~2.3GB of spill files in the sorted case.
--
Peter Geoghegan
Attachment: test-agg-sorted.sql (application/octet-stream, 867 bytes)