From: Craig Milhiser <craig(at)milhiser(dot)com>
To: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-bugs(at)lists(dot)postgresql(dot)org
Subject: Re: Reference to - BUG #18349: ERROR: invalid DSA memory alloc request size 1811939328, CONTEXT: parallel worker
Date: 2024-09-24 01:43:50
Message-ID: CA+wnhO2NDZFWB1TrZga-uLzpfSDjKH5Axa2d2+1Q76JSsS7yMg@mail.gmail.com
Lists: pgsql-bugs
On Sun, Sep 22, 2024 at 10:23 PM Thomas Munro <thomas(dot)munro(at)gmail(dot)com>
wrote:
> On Mon, Sep 23, 2024 at 1:46 PM Thomas Munro <thomas(dot)munro(at)gmail(dot)com>
> wrote:
> > 432 bytes
>
> Oh, as Tomas pointed out in the referenced thread,
Thanks for working on it and for the detailed explanation. I tested setting
max_parallel_workers_per_gather = 0, as suggested in the original thread, and
it works. We are putting that workaround into the application for our largest
customers: set it to 0 before the query, then back to 2 afterwards.
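Concretely, the workaround we are wiring in looks roughly like this (a minimal
sketch; the table name is an illustrative placeholder, and 2 happens to be our
normal setting, not necessarily yours):

    SET max_parallel_workers_per_gather = 0;  -- disable parallel workers for this session
    -- run the affected query (placeholder, not our real query)
    SELECT count(*) FROM some_large_table;
    SET max_parallel_workers_per_gather = 2;  -- restore our usual setting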
Your explanation also shows why rewriting the query works: I reduced the
number of rows being processed much earlier in the query. The original query
was written as one set of many joins that operated on millions of rows before
reducing them to a handful. I broke it into a materialized CTE that forces
Postgres to reduce the rows early and only then do the joins. Rewriting the
query is better regardless of this issue.
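The shape of the rewrite is roughly the following (table and column names are
illustrative, not our real schema):

    WITH reduced AS MATERIALIZED (
        -- force Postgres to shrink the row set first, instead of carrying
        -- millions of rows through all of the joins
        SELECT t.id, t.payload
        FROM   big_table t
        WHERE  t.selective_filter
    )
    SELECT r.id, a.detail, b.more_detail
    FROM   reduced r
    JOIN   table_a a ON a.id = r.id
    JOIN   table_b b ON b.id = r.id;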
I am working on getting a stock Postgres into our production protected
enclave with our production database; that is probably a full day of work I
need to splice in. We have a similar mechanism in our development
environment. Once it is working I can help test and debug any changes. I can
also work on a reproducible example.