From: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>
To: Craig Milhiser <craig(at)milhiser(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-bugs(at)lists(dot)postgresql(dot)org
Subject: Re: Reference to - BUG #18349: ERROR: invalid DSA memory alloc request size 1811939328, CONTEXT: parallel worker
Date: 2024-09-23 01:46:07
Message-ID: CA+hUKGJo9H75w3QoZrZCVm3eGg7ifJ0UvZZMw6bz=HCpwg0gKw@mail.gmail.com
Lists: pgsql-bugs
On Mon, Sep 23, 2024 at 12:52 PM Craig Milhiser <craig(at)milhiser(dot)com> wrote:
> On Sun, Sep 22, 2024 at 11:02 AM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> Craig Milhiser <craig(at)milhiser(dot)com> writes:
>> > Has the referenced bug in this discussion[1] been released?
>>
>> I don't see any indication, either in that thread or in the commit
>> log, that anything has been done in this area since about 16.2.
>> It's not an easy problem in general.
>>
>> Having said that, Aurora is not Postgres, and I don't know
>> how closely they track us. Can you reproduce this problem
>> on a stock build of community Postgres?
>>
>> regards, tom lane
>
>
> Thanks. I will work on setting that up. I am also getting the AWS team involved.
>
> I rewrote the one query I investigated. Without parallelism, to avoid this issue, it took 15 seconds; the rewritten query completes in 0.2 seconds. For this query I can avoid the issue, at least temporarily, by writing a better query. But we need to solve the real problem, and I have not looked at the other queries affecting me. I may not get so lucky again.
>
> I will post when I get the stock Postgres setup and running.
Hi,
FYI this is on my radar, and it would be good to try to make a small
back-patchable improvement. I would need to swap the problem back
into memory to be sure, but from memory the problem is that parallel
hash join partitions take 432 bytes of bookkeeping memory each. (They
also each have output buffers, but those are not allocated in one big
array; non-parallel hash join has a per-partition overhead too, but
it's smaller because there is less bookkeeping state, so we don't
hear about it.) Hash joins use partition files (AKA batches) to try
to keep each hash table under work_mem * hash_mem_multiplier, so if
you have 2^22 (~4 million) partitions we try to allocate 1811939328
bytes of memory (the number in $SUBJECT), exceeding our arbitrary 1GB
allocation limit.
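You can double-check that arithmetic yourself (using the 432-byte
per-partition figure above, which as I said is from memory):

    -- 432 bytes of bookkeeping per partition, times 2^22 partitions,
    -- is exactly the allocation size from the error message.
    SELECT 432::bigint * (1::bigint << 22) AS alloc_request_bytes;
    -- => 1811939328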
It's possible to turn that arbitrary allocation limit off, but that
would be treating a symptom, and certainly wouldn't produce good
performance. If you increased work_mem or hash_mem_multiplier
(probably at least doubling them), you might have better luck: past
some crossover point, doubling the size of the array of partitions
uses more memory than you can (potentially) save by halving the size
of the hash table!
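To illustrate that crossover with made-up numbers (a sketch only; the
real hash table size depends on your data):

    -- Going from 2^21 to 2^22 partitions costs ~900MB of extra
    -- bookkeeping, but can save at most half of the hash table --
    -- assumed here to be 1GB, so ~512MiB saved: a net loss.
    SELECT (1::bigint << 21) * 432          AS extra_bookkeeping_bytes, -- 905969664
           (1024::bigint * 1024 * 1024) / 2 AS max_hash_table_savings;  -- 536870912

And if you want to try giving the hash table more memory so that
fewer batches are needed (values are examples, not recommendations):

    SET work_mem = '256MB';
    SET hash_mem_multiplier = 4.0;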
Even aside from that arithmetic problem, anything more than around
2^10 (~1 thousand) partitions will start to perform worse unless you
also increase max_files_per_process to match, because every flush of
a spill-file page will likely have to close and open a file (on stock
PostgreSQL at least; Aurora may have a completely different scheme
for temporary spill data, for all I know).
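For example (the value here is illustrative; this setting only takes
effect at server start, and the kernel's file descriptor limit has to
allow it):

    -- How many files may each backend keep open? Default is 1000.
    SHOW max_files_per_process;

    -- Illustrative value; requires a server restart to take effect.
    ALTER SYSTEM SET max_files_per_process = 4096;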
So we could simply cap the number of partitions, and start ignoring
the work_mem * hash_mem_multiplier limit beyond that cap, but we
haven't done it yet because it's hard to pick a number and reports
are rare (i.e. of very large queries run with low work_mem, if that
is indeed the problem here). 2^21 would be the highest plausible
candidate (2^21 * 432 = ~900MB), but that's still very high.
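Checking that candidate's cost:

    -- Bookkeeping memory at a hypothetical 2^21-partition cap.
    SELECT 432::bigint * (1::bigint << 21);  -- 905969664, ~900MB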
There is a related problem of extreme skew (squillions of tuples in
one bucket), which is much harder to treat, but ideas were mentioned
in that and other threads... For the non-skewed version of the
problem, which may be more common, the problem at one level is that
the defaults are set for small-memory machines, people are running
increasingly huge joins on huge-memory machines, and the machinery
that makes this work is a bit naive and excessively expensive. We
could and should invent better strategies for coping.