From: David Rowley <dgrowleyml(at)gmail(dot)com>
To: Peter Geoghegan <pg(at)bowt(dot)ie>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Jeff Davis <pgsql(at)j-davis(dot)com>, "ldh(at)laurent-hasson(dot)com" <ldh(at)laurent-hasson(dot)com>, Justin Pryzby <pryzby(at)telsasoft(dot)com>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Big performance slowdown from 11.2 to 13.3
Date: 2021-07-22 16:17:45
Message-ID: CAApHDvqtORmfiEZnywZxui-CrdtmA4hfHaJfmjWm5iLq3TPxVw@mail.gmail.com
Lists: pgsql-performance
On Fri, 23 Jul 2021 at 04:14, Peter Geoghegan <pg(at)bowt(dot)ie> wrote:
>
> On Thu, Jul 22, 2021 at 8:45 AM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> > That is ... weird. Maybe you have found a bug in the spill-to-disk logic;
> > it's quite new after all. Can you extract a self-contained test case that
> > behaves this way?
>
> I wonder if this has something to do with the way that the input data
> is clustered. I recall noticing that that could significantly alter
> the behavior of HashAggs as of Postgres 13.
Isn't it more likely to be reaching the group limit rather than the
memory limit?
    if (input_groups * hashentrysize < hash_mem * 1024L)
    {
        if (num_partitions != NULL)
            *num_partitions = 0;
        *mem_limit = hash_mem * 1024L;
        *ngroups_limit = *mem_limit / hashentrysize;
        return;
    }
There are 55 aggregates on a varchar(255), so hashentrysize is likely
pretty big. If it were around 255 * 55 bytes per group, then only about
765591 groups would fit in the 10GB of memory.
David