From: Andres Freund <andres(at)anarazel(dot)de>
To: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Using WaitEventSet in the postmaster
Date: 2023-01-06 23:25:12
Message-ID: 20230106232512.46jdujlcknqgj2ct@awork3.anarazel.de
Lists: pgsql-hackers
Hi,
On 2023-01-07 11:08:36 +1300, Thomas Munro wrote:
> 1. Is it OK that we are now using APIs that might throw, in places
> where we weren't? I think so: we don't really expect WaitEventSet
> APIs to throw unless something is pretty seriously wrong, and if you
> hack things to inject failures there then you get a FATAL error and
> the postmaster exits and the children detect that. I think that is
> appropriate.
I think it's ok in principle. It might be that we'll find something to fix in
the future, but I don't see anything fundamental or obvious.
> 2. Is it really OK to delete the pqsignal_pm() infrastructure? I
> think so.
Same.
> 3. Is it OK to clobber the shared pending flag for SIGQUIT, SIGTERM,
> SIGINT? If you send all of these extremely rapidly, it's
> indeterminate which one will be seen by handle_shutdown_request().
That doesn't seem optimal. I'm mostly worried that we can end up downgrading a
shutdown request.
> I think that's probably acceptable? To be strict about processing only the
> first one that is delivered, I think you'd need an sa_mask to block all
> three signals, and then you wouldn't change pending_shutdown_request if it's
> already set, which I'm willing to code up if someone thinks that's
> important. (<vapourware>Ideally I would invent WL_SIGNAL to consume signal
> events serially without handlers or global variables</vapourware>.)
Hm. The need for blocking sa_mask solely comes from using one variable in
three signal handlers, right? It's not pretty, but to me the easiest fix here
seems to be have separate pending_{fast,smart,immediate}_shutdown_request
variables and deal with them in process_shutdown_request(). Might still make
sense to have one pending_shutdown_request variable, to avoid unnecessary
branches before calling process_shutdown_request().
> 5. Is the signal mask being correctly handled during forking? I
> think so: I decided to push the masking logic directly into the
> routine that forks, to make it easy to verify that all paths set the
> mask the way we want.
Hm. If I understand correctly, you used sigprocmask() directly (vs
PG_SETMASK()) in fork_process() because you want the old mask? But why do we
restore the prior mask, instead of just using PG_SETMASK(&UnBlockSig); as we
still do in a bunch of places in the postmaster?
Not that I'm advocating for that, but would there be any real harm in just
continuing to accept signals post fork? All the signal handlers would then
just end up pointlessly setting a process-local variable that's never read
again. If true, it'd be good to add a comment explaining that the blocking is
just a belt-and-suspenders thing.
> (While thinking about that I noticed that signals don't seem to be initially
> masked on Windows; I think that's a pre-existing condition, and I assume we
> get away with it because nothing reaches the fake signal dispatch code soon
> enough to break anything? Not changing that in this patch.)
It's indeed a bit odd that we do pgwin32_signal_initialize() before the
initmask() and PG_SETMASK(&BlockSig) in InitPostmasterChild(). I guess it's
kinda harmless though?
I'm now somewhat weirded out by the choice to do pg_strong_random_init() in
fork_process() rather than InitPostmasterChild(). Seems odd.
> 6. Is the naming and style OK? Better ideas welcome, but basically I
> tried to avoid all unnecessary refactoring and changes, so no real
> logic moves around, and the changes are pretty close to "mechanical".
> One bikeshed decision was what to call the {handle,process}_XXX
> functions and associated flags.
I wonder if it'd be good to have a _pm_ in the name.
> Maybe "action" isn't the best name;
Yea, I don't like it. A shutdown is also an action, etc. What about just using
_pmsignal_? It's a bit odd because there's two signals in the name, but it
still feels better than 'action' and better than the existing sigusr1_handler.
> +
> +/* I/O multiplexing object */
> +static WaitEventSet *wait_set;
I'd name it a bit more obviously connected to postmaster, particularly because
it does survive into forked processes and needs to be closed there.
> +/*
> + * Activate or deactivate notifications of server socket events. Since we
> + * don't currently have a way to remove events from an existing WaitEventSet,
> + * we'll just destroy and recreate the whole thing. This is called during
> + * shutdown so we can wait for backends to exit without accepting new
> + * connections, and during crash reinitialization when we need to start
> + * listening for new connections again.
> + */
I'd maybe reference that this gets cleaned up via ClosePostmasterPorts(), it's
not *immediately* obvious.
> +static void
> +ConfigurePostmasterWaitSet(bool accept_connections)
> +{
> + if (wait_set)
> + FreeWaitEventSet(wait_set);
> + wait_set = NULL;
> +
> + wait_set = CreateWaitEventSet(CurrentMemoryContext, 1 + MAXLISTEN);
> + AddWaitEventToSet(wait_set, WL_LATCH_SET, PGINVALID_SOCKET, MyLatch, NULL);
Is there any reason to use MAXLISTEN here? We know how many sockets we're
listening on by this point, I think? No idea if the overhead matters anywhere,
but ...
I guess all the other code already does so, but afaict we don't dynamically
allocate resources there for things like ListenSocket[].
> - /* Now check the select() result */
> - if (selres < 0)
> - {
> - if (errno != EINTR && errno != EWOULDBLOCK)
> - {
> - ereport(LOG,
> - (errcode_for_socket_access(),
> - errmsg("select() failed in postmaster: %m")));
> - return STATUS_ERROR;
> - }
> - }
> + nevents = WaitEventSetWait(wait_set,
> + timeout.tv_sec * 1000 + timeout.tv_usec / 1000,
> + events,
> + lengthof(events),
> + 0 /* postmaster posts no wait_events */ );
>
> /*
> - * New connection pending on any of our sockets? If so, fork a child
> - * process to deal with it.
> + * Latch set by signal handler, or new connection pending on any of
> + * our sockets? If the latter, fork a child process to deal with it.
> */
> - if (selres > 0)
> + for (int i = 0; i < nevents; i++)
> {
Hm. This is preexisting behaviour, but now it seems somewhat odd that we might
end up happily forking a backend for each socket without checking signals in
between. Forking might take a while, and if a signal arrived since the
WaitEventSetWait() we won't react to it.
> static void
> PostmasterStateMachine(void)
> @@ -3796,6 +3819,9 @@ PostmasterStateMachine(void)
>
> if (pmState == PM_WAIT_DEAD_END)
> {
> + /* Don't allow any new socket connection events. */
> + ConfigurePostmasterWaitSet(false);
Hm. Is anything actually using the wait set until we re-create it with (true)
below?
Greetings,
Andres Freund