From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Missed check for too-many-children in bgworker spawning
Date: 2019-10-09 14:21:15
Message-ID: 21667.1570630875@sss.pgh.pa.us
Lists: pgsql-hackers
Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> On Mon, Oct 7, 2019 at 4:03 PM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> ... Moreover, we have to --- and already do, I trust --- deal with
>> other resource-exhaustion errors in exactly the same code path, notably
>> fork(2) failure which we simply can't predict or prevent. Doesn't the
>> parallel query logic already deal sanely with failure to obtain as many
>> workers as it wanted?
> If we fail to obtain workers because there are not enough worker
> slots available, parallel query deals with that smoothly. But once
> we have a slot, any further failure will cause the parallel query to
> ERROR out.
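
(For context, the "smooth" path described above is roughly the pattern
below -- a minimal sketch, not the actual parallel.c code; it assumes
"worker" is an already-populated BackgroundWorker and "nworkers" is the
requested worker count.  RegisterDynamicBackgroundWorker() returns false
when no bgworker slot is free.)

    #include "postmaster/bgworker.h"

    /*
     * Register up to nworkers dynamic background workers, tolerating
     * getting fewer than requested when slots run out.
     */
    int         nlaunched = 0;

    for (int i = 0; i < nworkers; i++)
    {
        BackgroundWorkerHandle *handle;

        if (!RegisterDynamicBackgroundWorker(&worker, &handle))
            break;          /* no slot free: proceed with fewer workers */
        nlaunched++;
    }
    /* the query must then execute correctly with nlaunched < nworkers */
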
Well, that means we have a not-very-stable system then.
We could improve on matters so far as the postmaster's child-process
arrays are concerned, by defining separate slot "pools" for the different
types of child processes. But I don't see much point if the code is
not prepared to recover from a fork() failure --- and if it is, that
would a fortiori deal with out-of-child-slots as well.
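
To be concrete, "prepared to recover" here means treating fork() failure
(EAGAIN, ENOMEM) the same way as the out-of-slots case: report it to the
caller and degrade gracefully instead of ERRORing out.  A minimal
standalone sketch (start_child is a made-up name for illustration):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /*
     * Launch a child process, reporting failure to the caller rather
     * than treating it as fatal.  Returns 0 on success, -1 on failure.
     */
    static int
    start_child(void (*child_main)(void))
    {
        pid_t       pid = fork();

        if (pid == -1)
        {
            /*
             * Ordinary resource exhaustion: log it and let the caller
             * fall back, e.g. by running with fewer workers.
             */
            fprintf(stderr, "could not fork child: %s\n",
                    strerror(errno));
            return -1;
        }
        if (pid == 0)
        {
            child_main();       /* in the child */
            _exit(0);
        }
        return 0;               /* in the parent: child launched */
    }
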
regards, tom lane