Re: MAX_BACKENDS size (comment accuracy)

From: Jacob Brazeal <jacob(dot)brazeal(at)gmail(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: MAX_BACKENDS size (comment accuracy)
Date: 2025-01-26 20:55:15
Message-ID: CA+COZaAURmfeXjgsFSr2TWRa6r0PYVhf=FVEeEdVGx13VWSyqw@mail.gmail.com
Lists: pgsql-hackers

I realized I didn't send the previous reply to the mailing list, so I'm copying
it here.
---
The patch series looks good. It appears to leave 10 bits of unused space (bits
20-29) in the state.

> StaticAssertDecl((MAX_BACKENDS & LW_FLAG_MASK) == 0,
> "MAX_BACKENDS and LW_FLAG_MASK overlap");

Should this check that (MAX_BACKENDS & LW_LOCK_MASK) == 0, to also ensure that
the LW_VAL_EXCLUSIVE bit does not overlap?
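
Put another way, the same concern could be covered by a separate assert on the
exclusive bit itself; this is just a sketch of the intent, reusing the macro
names from the patch:

/* sketch only: the exclusive-lock bit must never collide with the flag bits */
StaticAssertDecl((LW_VAL_EXCLUSIVE & LW_FLAG_MASK) == 0,
				 "LW_VAL_EXCLUSIVE and LW_FLAG_MASK overlap");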

> I continue to believe that MAX_BACKENDS of 2^16-1 would be sufficient - we're
> far from that being a realistic limit. Halfing the size of LWLock and laying
> the ground work for making the wait-list lock-free imo would be very well
> worth the reduction in an unrealistic limit...

Neat. The current queuing implementation does seem pretty heavy, and I'd have
time to work on a lock-free version. It seems like the wait-list state itself
could be managed similarly to LWLockAttemptLock, with an atomic
compare-and-set. I'm less sure how to manage the full proclist queue, since
only the head and tail actually live in the LWLock; would we need to do
something like copy the whole list, add our process to the copy, and then swap
the LWLock's reference over to the new list?
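
For the wait-list state itself, roughly what I have in mind is a loop in the
style of LWLockAttemptLock. This is only a sketch with an invented function
name, written as if it lived in lwlock.c:

/*
 * Sketch only: advertise ourselves as a waiter with a single
 * compare-and-exchange on lock->state, rather than taking the
 * LW_FLAG_LOCKED wait-list lock first.  Returns true if we were the
 * first to set the waiters flag.
 */
static bool
lwlock_mark_waiter_sketch(LWLock *lock)
{
	uint32		old_state = pg_atomic_read_u32(&lock->state);

	for (;;)
	{
		uint32		desired = old_state | LW_FLAG_HAS_WAITERS;

		/* on failure, pg_atomic_compare_exchange_u32 refreshes old_state */
		if (pg_atomic_compare_exchange_u32(&lock->state, &old_state, desired))
			return (old_state & LW_FLAG_HAS_WAITERS) == 0;
	}
}

That only covers a flag bit, of course; the list links themselves live in each
waiter's PGPROC, which is the part I'm unsure about.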

> PS: FYI, this list values properly quoting messages instead of replying on top
> of the entire quoted messages.

Oops, thank you for the heads up. Hopefully this reply is formatted correctly;
I'm still getting the hang of things.

Regards,
Jacob
