From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] parallel.c oblivion of worker-startup failures
Date: 2017-12-20 21:09:45
Message-ID: CA+TgmoasUSVQqsBnp2_AUWuPB65ypdNL2p1-UG9mhf+iubhJzA@mail.gmail.com
Lists: pgsql-hackers
On Tue, Dec 19, 2017 at 11:28 PM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> That is not the main point I am bothered about. I am concerned that
> the patch proposed by you can lead to a hang if there is a crash (an
> abrupt failure like proc_exit(1)) after attaching to the error queue.
> This is explained in my email upthread [1]. I think we can fix it by
> trying to read the queues as you mentioned, but I was not sure that is
> the best way to deal with it, so I proposed an alternate solution.
OK. My core point is that I really, really don't want Gather or
Gather Merge or parallel CREATE INDEX to have to worry about these
cases; I believe strongly that it is best if these cases cause an
error at the ParallelContext layer. I read your email as suggesting
the opposite, but maybe I misunderstood you.
One thing I thought about is introducing a new state,
BGWH_STARTUP_FAILED, as distinguished from BGWH_STOPPED. However, that
seems problematic: once the worker is unregistered, we wouldn't know
which of the two states to return, especially if the slot has been
reused. Furthermore, it doesn't help in the case where the worker
starts and immediately exits without attaching to the DSM. So it
doesn't seem to me that we can fix the problem this way.
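For concreteness, here is what that would look like against the
BgwHandleStatus enum in bgworker.h; BGWH_STARTUP_FAILED is the
hypothetical addition being rejected here, not anything that exists:

    typedef enum BgwHandleStatus
    {
        BGWH_STARTED,           /* worker is running */
        BGWH_NOT_YET_STARTED,   /* worker hasn't been started yet */
        BGWH_STOPPED,           /* worker has exited */
        BGWH_STARTUP_FAILED,    /* hypothetical: worker never got going */
        BGWH_POSTMASTER_DIED    /* postmaster died; worker status unclear */
    } BgwHandleStatus;

The trouble described above is that GetBackgroundWorkerPid() would have
no basis for choosing between BGWH_STOPPED and BGWH_STARTUP_FAILED once
the slot is gone.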
It seems to me that we instead have to solve the problem in the
ParallelContext layer, which can work out what happened to a worker by
comparing the state reported by the background worker layer with what
we can find out via the DSM. The background worker layer knows whether
or not the worker is running and can signal us when it changes state,
but it can't know whether the worker exited cleanly or uncleanly. To
know that, we have to look at things like whether it sent us a
Terminate message before it stopped.
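A minimal sketch of the kind of cross-check I mean, assuming the
leader still has the worker's BackgroundWorkerHandle and the shm_mq it
set up as that worker's error queue (the helper name is made up for
illustration):

    #include "postgres.h"
    #include "postmaster/bgworker.h"
    #include "storage/shm_mq.h"

    /*
     * Hypothetical helper: true if the worker is gone but never attached
     * as the sender on its error queue, meaning it failed during startup
     * and no Terminate message will ever arrive.
     */
    static bool
    WorkerDiedWithoutAttaching(BackgroundWorkerHandle *handle,
                               shm_mq *error_mq)
    {
        pid_t       pid;
        BgwHandleStatus status = GetBackgroundWorkerPid(handle, &pid);

        /*
         * The bgworker layer says the worker stopped; the DSM says it
         * never showed up.  Neither layer can conclude that on its own.
         */
        return status == BGWH_STOPPED &&
               shm_mq_get_sender(error_mq) == NULL;
    }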
While I think that the clean-vs-unclean distinction has to be made at
the ParallelContext layer, it doesn't necessarily have to happen by
inserting an error queue. I think that may be a convenient way; for
example, we could plug the hole you mentioned before by attaching an
on_dsm_detach callback that signals the leader with
SendProcSignal(..., PROCSIG_PARALLEL_MESSAGE, ...). At that point, all
of the information required for the leader to know that the failure
has occurred is present in shared memory; the problem is just that the
leader may have nothing prompting it to look at the relevant bits of
information, and the callback would fix that. But the information
could probably be communicated in other ways as well.
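To make that concrete, here is a rough sketch of the callback approach,
assuming the worker copies the leader's pid and backend ID out of the
parallel DSM segment at startup (leader_pid and leader_backend_id are
illustrative names, not actual parallel.c fields):

    #include "postgres.h"
    #include "storage/backendid.h"
    #include "storage/dsm.h"
    #include "storage/procsignal.h"

    static pid_t        leader_pid;         /* filled in at startup */
    static BackendId    leader_backend_id;  /* likewise */

    /*
     * Runs in the worker whenever it detaches from the segment, for
     * any reason, including error or abrupt exit.  The leader gets
     * poked with PROCSIG_PARALLEL_MESSAGE and goes to look at shared
     * memory.
     */
    static void
    NotifyLeaderOnDetach(dsm_segment *seg, Datum arg)
    {
        SendProcSignal(leader_pid, PROCSIG_PARALLEL_MESSAGE,
                       leader_backend_id);
    }

    /* To be called early in worker startup, right after attaching: */
    static void
    RegisterLeaderNotification(dsm_segment *seg)
    {
        on_dsm_detach(seg, NotifyLeaderOnDetach, (Datum) 0);
    }

Since the callback fires on any detach, a clean exit would signal the
leader too; presumably that's harmless, because in that case the leader
finds a Terminate message in the queue and concludes the worker shut
down normally.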
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company