From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Tomas Vondra <tomas(dot)vondra(at)enterprisedb(dot)com>, Matt Smiley <msmiley(at)gitlab(dot)com>, Nikolay Samokhvalov <nik(at)postgres(dot)ai>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Configurable FP_LOCK_SLOTS_PER_BACKEND
Date: 2023-08-08 20:44:37
Message-ID: CA+Tgmobn1=TS2qQMcj3dgH80ud2of=oGHw_31=t4L05ZYkroCQ@mail.gmail.com
Lists: pgsql-hackers
On Mon, Aug 7, 2023 at 6:05 PM Andres Freund <andres(at)anarazel(dot)de> wrote:
> I think the biggest flaw of the locking scheme is that the LockHash locks
> protect two, somewhat independent, things:
> 1) the set of currently lockable objects, i.e. the entries in the hash table [partition]
> 2) the state of all the locks [in a partition]
>
> It'd not be that hard to avoid the shared hashtable lookup in a number of
> cases, e.g. by keeping LOCALLOCK entries around for longer, as I suggest
> above. But we can't, in general, avoid the lock on the partition anyway, as
> each lock's state is also protected by the partition lock.
Yes, and that's a huge problem. The main selling point of the whole
fast-path mechanism is to ease the pressure on the lock manager
partition locks, and if we did something like what you described in
the previous email without changing the locking regimen, we'd bring
all of that contention back. I'm pretty sure that would suck.
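For anyone who doesn't have the code paged in, the fast path records
weak relation locks in per-backend slots, roughly like this. This is a
simplified sketch paraphrasing fields of PGPROC in
src/include/storage/proc.h, not a literal copy:

    /* Simplified sketch of the per-backend fast-path state; the real
     * fields live in PGPROC (src/include/storage/proc.h), and the
     * types are the usual PostgreSQL ones. */
    #define FP_LOCK_SLOTS_PER_BACKEND 16

    typedef struct FastPathSketch
    {
        LWLock fpInfoLock;  /* per-backend lock, essentially uncontended */
        uint64 fpLockBits;  /* three bits of lock-mode flags per slot */
        Oid    fpRelId[FP_LOCK_SLOTS_PER_BACKEND]; /* slot -> relation */
    } FastPathSketch;

    /* A weak relation lock that fits in a free slot is recorded here
     * under only this backend's fpInfoLock; no lock-manager partition
     * lock is taken. 64 backends locking the same table therefore
     * don't collide at all, unless the slots fill up or somebody
     * wants a strong lock on that table. */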
> The amount of work to do a lookup in the shared hashtable and/or create a new
> entry therein is quite bounded. But the work for acquiring a lock is much less
> so. We'll e.g. often have to iterate over the set of lock holders etc.
>
> I think we ought to investigate whether pushing down the locking for the "lock
> state" into the individual locks is worth it. That way the partitioned lock
> would just protect the hashtable.
I think this would still suck. Suppose you put an LWLock or slock_t in
each LOCK. If you now run a lot of select queries against the same
table (e.g. pgbench -S -c 64 -j 64), everyone is going to fight over
the lock counts for that table. Here again, the value of the fast-path
system is that it spreads out the contention in ways that approaches
like this can't.
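To illustrate where the contention would come from, the hypothetical
layout would be something like this (nothing below exists; it's only
meant to show the cache line ping-pong):

    /* Hypothetical: lock state guarded by a mutex inside LOCK itself. */
    typedef struct LockWithMutex
    {
        LOCKTAG  tag;
        slock_t  mutex;                 /* protects everything below */
        LOCKMASK grantMask;
        int      granted[MAX_LOCKMODES];
        int      nGranted;
    } LockWithMutex;

    /* Every one of the 64 pgbench backends would do this, twice per
     * query (acquire and release), against the one hot LOCK: */
    static void
    grant_access_share(LockWithMutex *lock)
    {
        SpinLockAcquire(&lock->mutex);
        lock->granted[AccessShareLock]++;   /* -- again at release */
        lock->nGranted++;
        SpinLockRelease(&lock->mutex);
    }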
Or, hmm, maybe what you're really suggesting is pushing the state down
into each PROCLOCK rather than each LOCK. That would be more promising
if we could do it, because that is per-lock *and also per-backend*.
But you can't decide from looking at a single PROCLOCK whether a new
lock at some given lock mode is grantable or not, at least not with
the current PROCLOCK representation.
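To spell that out: grantability is a property of the aggregate state in
the LOCK, which no single PROCLOCK carries. Trimming the structs in
src/include/storage/lock.h down to the relevant fields (several fields
omitted):

    typedef struct LOCK
    {
        LOCKTAG    tag;
        LOCKMASK   grantMask;   /* union of granted modes, across
                                 * *all* holders */
        LOCKMASK   waitMask;    /* modes somebody is waiting for */
        dlist_head procLocks;   /* every PROCLOCK on this lock */
        int        granted[MAX_LOCKMODES]; /* per-mode holder counts */
    } LOCK;

    typedef struct PROCLOCK
    {
        PROCLOCKTAG tag;        /* (LOCK, PGPROC) pair */
        LOCKMASK    holdMask;   /* modes held by this one backend only */
    } PROCLOCK;

LockCheckConflicts() needs grantMask and the per-mode counts; holdMask
alone can't tell you whether some *other* backend holds a conflicting
mode.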
I think any workable solution here has to allow a backend to take a
weak relation lock without contending with other backends trying to
take the same weak relation lock (provided there are no strong
lockers). Maybe backends should be able to allocate PROCLOCKs and
record weak relation locks there without actually linking them up to
LOCK objects, or something like that. Anyone who wants a strong lock
must first go and find all of those objects for the LOCK they want and
connect them up to that LOCK.
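In pseudo-C, one possible shape of that protocol (every function name
below is invented for illustration; none of this exists today):

    /* Weak locker: record the lock in a backend-private PROCLOCK that
     * is not linked into any LOCK, touching no shared lock state. */
    static void
    record_weak_relation_lock(PGPROC *me, const LOCKTAG *tag, LOCKMODE mode)
    {
        PROCLOCK *pl = alloc_unlinked_proclock(me, tag);    /* invented */

        pl->holdMask |= LOCKBIT_ON(mode);
    }

    /* Strong locker: before judging conflicts, find every backend's
     * unlinked PROCLOCKs for this lock tag and attach them, so that
     * the LOCK's aggregate state becomes trustworthy. */
    static void
    absorb_unlinked_proclocks(LOCK *lock)
    {
        for (int i = 0; i < ProcGlobal->allProcCount; i++)
        {
            PGPROC   *proc = &ProcGlobal->allProcs[i];
            PROCLOCK *pl = find_unlinked_proclock(proc, &lock->tag); /* invented */

            if (pl != NULL)
                link_proclock_into_lock(lock, pl);          /* invented */
        }
    }

That second step is more or less what FastPathTransferRelationLocks()
already does for the existing 16 fast-path slots, generalized to an
unbounded number of weak locks.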
--
Robert Haas
EDB: http://www.enterprisedb.com