Re: Big performance slowdown from 11.2 to 13.3

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Peter Geoghegan <pg(at)bowt(dot)ie>
Cc: David Rowley <dgrowleyml(at)gmail(dot)com>, "ldh(at)laurent-hasson(dot)com" <ldh(at)laurent-hasson(dot)com>, Justin Pryzby <pryzby(at)telsasoft(dot)com>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Big performance slowdown from 11.2 to 13.3
Date: 2021-07-22 17:11:33
Message-ID: 789413.1626973893@sss.pgh.pa.us
Lists: pgsql-performance

Peter Geoghegan <pg(at)bowt(dot)ie> writes:
> I also suspect that if Laurent set work_mem and/or hash_mem_multiplier
> *extremely* aggressively, then eventually the hash agg would be
> in-memory. And without actually using all that much memory.

No, he already tried, upthread. The trouble is that he's on a Windows
machine, so get_hash_mem is quasi-artificially constraining the product
to 2GB. And he needs it to be a bit more than that. Whether the
constraint is hitting at the ngroups stage or it's related to actual
memory consumption isn't that relevant.

What I'm wondering about is whether it's worth putting in a solution
for this issue in isolation, or whether we ought to embark on the
long-ignored project of getting rid of use of "long" for any
memory-size-related computations. There would be no chance of
back-patching something like the latter into v13, though.

regards, tom lane
