From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Missed check for too-many-children in bgworker spawning
Date: 2019-11-04 18:07:33
Message-ID: 399.1572890853@sss.pgh.pa.us
Lists: pgsql-hackers
Alvaro Herrera <alvherre(at)2ndquadrant(dot)com> writes:
> On 2019-Nov-04, Robert Haas wrote:
>> On Mon, Nov 4, 2019 at 10:42 AM Alvaro Herrera <alvherre(at)2ndquadrant(dot)com> wrote:
>>> I agree with this point in principle. Everything else (queries,
>>> checkpointing) can fail, but it's critical that postmaster continues to
>>> run [...]

>> Sure, I'm not arguing that the postmaster should blow up and die.

> I must have misinterpreted you, then. But then I also misinterpreted
> Tom, because I thought it was this stability problem that was "utter
> bunkum".
I fixed the postmaster crash problem in commit 3887e9455. The residual
issue that I think is entirely bogus is that the parallel query start
code will silently continue without workers if it hits our internal
resource limit of how many bgworker ProcArray slots there are, but
will not do the same when it hits the external resource limit of the
kernel refusing to fork(). I grant that there might be implementation
reasons why that is difficult, but I reject Robert's apparent opinion
that it's somehow desirable to behave that way. As things stand, we
get all of the disadvantages (you can't predict how many workers
you'll get) and none of the advantages (robustness in the face of
system resource exhaustion).
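
For concreteness, the asymmetry looks roughly like this. (A minimal
sketch: the bgworker API calls are real, but the surrounding logic is
illustrative, not the actual code in parallel.c.)

    /* Illustrative only; assumes a PostgreSQL backend context. */
    #include "postgres.h"
    #include "postmaster/bgworker.h"

    static int
    launch_workers(BackgroundWorker *worker,
                   BackgroundWorkerHandle **handles, int nworkers)
    {
        int     launched = 0;

        for (int i = 0; i < nworkers; i++)
        {
            /*
             * Internal limit: no free bgworker slot.  Registration
             * fails cleanly, and the caller just runs the query with
             * however many workers it managed to get.
             */
            if (!RegisterDynamicBackgroundWorker(worker,
                                                 &handles[launched]))
                break;          /* silently continue with fewer */
            launched++;
        }

        for (int i = 0; i < launched; i++)
        {
            pid_t   pid;

            /*
             * External limit: registration succeeded, but the
             * postmaster's fork() failed afterward.  The leader only
             * discovers that here, when the worker is reported as
             * stopped, and it errors out rather than degrading to
             * fewer workers as above.
             */
            if (WaitForBackgroundWorkerStartup(handles[i], &pid) ==
                BGWH_STOPPED)
                ereport(ERROR,
                        (errmsg("parallel worker failed to start")));
        }

        return launched;
    }
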
regards, tom lane