From: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>
To: Andrei Lepikhov <lepihov(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Craig Milhiser <craig(at)milhiser(dot)com>, pgsql-bugs(at)lists(dot)postgresql(dot)org
Subject: Re: Reference to - BUG #18349: ERROR: invalid DSA memory alloc request size 1811939328, CONTEXT: parallel worker
Date: 2024-10-17 08:57:25
Message-ID: CA+hUKGKKQh7RkUrYfEq+O1AtTgSsRrhR3_t_+dOpz1o9ebuxrA@mail.gmail.com
Lists: pgsql-bugs
On Thu, Oct 17, 2024 at 9:12 PM Andrei Lepikhov <lepihov(at)gmail(dot)com> wrote:
> Yeah, I misunderstood the meaning of the estimated_size variable. Your
> solution is more universal. I also confirm that it passes my synthetic test.
> It also raises an immediate question: what if we have too many
> duplicates? In user complaints I sometimes see examples where, while
> analysing the database's logical consistency, a query passes through
> millions of duplicates to find an unexpected value. Do we need an upper
> memory consumption limit here? I recall a mailing list thread proposing
> a general approach to limiting backend memory consumption, but it ended
> with no result.
It is a hard problem alright [1].
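
To make the quoted concern concrete: a hash join routes rows to batches by their hash value, so rows that share one join key can never be separated by splitting batches. Here is a minimal standalone sketch (using a toy multiplicative hash, not PostgreSQL's real hash functions) showing that doubling nbatch moves all duplicates together, so the oversized batch never shrinks:

```c
#include <stdio.h>
#include <stdint.h>

/* Toy multiplicative hash; stands in for PostgreSQL's real hash functions. */
static uint32_t
toy_hash(uint32_t key)
{
    return key * 2654435761u;   /* Knuth's multiplicative constant */
}

int
main(void)
{
    uint32_t dup_key = 42;      /* every duplicate row carries this key */

    /*
     * The batch number is derived from the hash, so all duplicates land
     * in the same batch no matter how many times nbatch is doubled.
     */
    for (int nbatch = 2; nbatch <= 64; nbatch *= 2)
        printf("nbatch = %2d -> duplicates all go to batch %u\n",
               nbatch, toy_hash(dup_key) & (uint32_t) (nbatch - 1));
    return 0;
}
```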
> The patch looks good, as does the commentary.
Thanks, I will go ahead and push this now.
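
For readers following the thread without the patch in front of them: the general shape of the fix under discussion is to clamp the size of the request before handing it to dsa_allocate(), which rejects requests above PostgreSQL's 1 GB MaxAllocSize with the "invalid DSA memory alloc request size" error seen in the bug report. Below is a hedged sketch of that clamping idea; HashBucketSlot and clamp_nbuckets are hypothetical names for illustration, and this is not the committed code:

```c
#include <stddef.h>
#include <stdint.h>

/*
 * PostgreSQL caps ordinary allocations at MaxAllocSize (0x3fffffff,
 * just under 1 GB); larger requests are rejected with an error.
 */
#define MAX_ALLOC_SIZE ((size_t) 0x3fffffff)

/* Hypothetical bucket slot: one pointer-sized chain head per bucket. */
typedef struct HashBucketSlot
{
    uintptr_t chain_head;
} HashBucketSlot;

/*
 * Clamp a desired bucket count so that the bucket array always fits in
 * a single allowable allocation, keeping the result a power of two as
 * hash tables typically require.  Illustrative only.
 */
size_t
clamp_nbuckets(size_t requested_nbuckets)
{
    size_t max_nbuckets = MAX_ALLOC_SIZE / sizeof(HashBucketSlot);

    /* Round max_nbuckets down to a power of two by clearing low bits. */
    while (max_nbuckets & (max_nbuckets - 1))
        max_nbuckets &= max_nbuckets - 1;

    return requested_nbuckets > max_nbuckets ? max_nbuckets
                                             : requested_nbuckets;
}
```

The design point is simply that the clamp happens on the request size rather than on the data: the hash table may end up with longer bucket chains than the load factor would like, but it degrades gracefully instead of erroring out.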