From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "ldh(at)laurent-hasson(dot)com" <ldh(at)laurent-hasson(dot)com>
Cc: David Rowley <dgrowleyml(at)gmail(dot)com>, Peter Geoghegan <pg(at)bowt(dot)ie>, Justin Pryzby <pryzby(at)telsasoft(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Big performance slowdown from 11.2 to 13.3
Date: 2021-07-22 15:56:49
Message-ID: 785218.1626969409@sss.pgh.pa.us
Lists: pgsql-performance
I wrote:
> "ldh(at)laurent-hasson(dot)com" <ldh(at)laurent-hasson(dot)com> writes:
>> It's still spilling to disk and seems to cap at 2.5GB of memory usage in spite of configuration.
> That is ... weird.
Oh: see get_hash_mem:

	hash_mem = (double) work_mem * hash_mem_multiplier;

	/*
	 * guc.c enforces a MAX_KILOBYTES limitation on work_mem in order to
	 * support the assumption that raw derived byte values can be stored in
	 * 'long' variables.  The returned hash_mem value must also meet this
	 * assumption.
	 *
	 * We clamp the final value rather than throw an error because it should
	 * be possible to set work_mem and hash_mem_multiplier independently.
	 */
	if (hash_mem < MAX_KILOBYTES)
		return (int) hash_mem;
	return MAX_KILOBYTES;
So basically, we now have a hard restriction that hashaggs can't use
more than MAX_KILOBYTES (that is, INT_MAX/1024 kilobytes, or
approximately 2GB), and this use case is getting eaten alive by that
restriction.  Seems like we need to do something about that.
regards, tom lane