From: Craig Ringer <craig(dot)ringer(at)enterprisedb(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Improving LWLock wait events
Date: 2020-12-23 07:51:50
Message-ID: CAGRY4nzd2LHe1fORZzqGz6sHunB9fr9z2Ass4=Mff34EwpHGuQ@mail.gmail.com
Lists: pgsql-hackers
On Mon, 21 Dec 2020 at 05:27, Andres Freund <andres(at)anarazel(dot)de> wrote:
> Hi,
>
> The current wait events are already pretty useful. But I think we could
> make them more informative without adding real runtime overhead.
>
>
All 1-3 sound pretty sensible to me.
> I also think there's a 4, but I think the tradeoffs are a bit more
> complicated:
>
> 4) For a few types of lwlock just knowing the tranche isn't
> sufficient. E.g. knowing whether it's one or different buffer mapping locks
> are being waited on is important to judge contention.
>
I've struggled with this quite a bit myself.
In particular, for tools that validate acquire-ordering safety it's
desirable to be able to identify a specific lock in a backend-independent
way.
> The hardest part would be to know how to identify individual locks. The
> easiest would probably be to just mask in a parts of the lwlock address
> (e.g. shift it right by INTALIGN, and then mask in the result into the
> eventId). That seems a bit unsatisfying.
>
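For concreteness, the masked-address scheme described above might look something like this standalone sketch. The function name and event-ID layout here are illustrative only, not the actual PostgreSQL wait-event encoding:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the real LWLock; padded so adjacent array elements
 * have distinct addresses after the alignment shift. */
typedef struct LWLock
{
	uint16_t	tranche;
	char		pad[14];	/* real lock state omitted */
} LWLock;

/*
 * Hypothetical: derive a per-lock event ID by shifting away the
 * alignment bits of the lock's address and masking the low bits
 * into the event ID alongside the class/tranche bits.
 */
static uint32_t
lwlock_event_id(uint32_t class_and_tranche, const LWLock *lock)
{
	uintptr_t	addr = (uintptr_t) lock;

	return class_and_tranche | (uint32_t) ((addr >> 2) & 0xFFFF);
}
```

Within a single process this distinguishes, say, adjacent buffer mapping locks cheaply, which is exactly why the address is only meaningful per-backend (see below).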
It also won't work reliably for locks in dsm segments, since the lock can
be mapped to a different address in different backends.
> We could probably do a bit better: We could just store the information about
> tranche / offset within tranche at LWLockInitialize() time, instead of
> computing something just before waiting. While LWLock.tranche is only
> 16 bits
> right now, the following two bytes are currently padding...
>
> That'd allow us to have proper numerical identification for nearly all
> tranches, without needing to go back to the complexity of having tranches
> specify base & stride.
>
That sounds appealing. It'd work for any lock in MainLWLockArray - all
built-in individual LWLocks, LWTRANCHE_BUFFER_MAPPING,
LWTRANCHE_LOCK_MANAGER, LWTRANCHE_PREDICATE_LOCK_MANAGER, any lock
allocated by RequestNamedLWLockTranche().
Some of the other tranches allocate locks in contiguous fixed blocks or in
ways that would let them maintain a counter.
We'd need some kind of "unknown" placeholder value for LWLocks where that
doesn't make sense, though: most locks allocated by callers that make
their own LWLockNewTrancheId() call, and locks in some of the built-in
tranches that aren't allocated in MainLWLockArray.
So I suggest retaining the current LWLockInitialize() and making it a
wrapper around LWLockInitializeWithIndex() or similar. We could either use
1-based indexes and reserve 0 as "unknown", or use 0-based indexes and
reserve (max-1) as "unknown".
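A minimal standalone sketch of that shape (the struct layout and the
LWLockInitializeWithIndex() name are illustrative, not the actual
PostgreSQL definitions) could be:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical: the two padding bytes after LWLock.tranche are
 * repurposed to store a per-tranche instance index recorded at
 * initialization time. */
typedef struct LWLock
{
	uint16_t	tranche;		/* existing 16-bit tranche ID */
	uint16_t	instance_index; /* hypothetical: 0 = unknown, else 1-based */
	/* ... lock state, waiter list, etc. omitted ... */
} LWLock;

/*
 * Hypothetical extended initializer: callers that can number their
 * locks (e.g. by slot in MainLWLockArray) pass a 1-based index.
 */
static void
LWLockInitializeWithIndex(LWLock *lock, uint16_t tranche_id, uint16_t index)
{
	lock->tranche = tranche_id;
	lock->instance_index = index;
}

/*
 * The existing entry point becomes a thin wrapper recording "unknown",
 * so callers that can't number their locks need no changes.
 */
static void
LWLockInitialize(LWLock *lock, uint16_t tranche_id)
{
	LWLockInitializeWithIndex(lock, tranche_id, 0);
}
```

The wrapper keeps the existing API source-compatible while letting
tranches that allocate locks in contiguous blocks report a stable,
backend-independent lock identity in wait events.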