From: | Jeremy Schneider <schnjere(at)amazon(dot)com> |
---|---|
To: | Andres Freund <andres(at)anarazel(dot)de>, Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Tomas Vondra <tomas(dot)vondra(at)enterprisedb(dot)com>, Matt Smiley <msmiley(at)gitlab(dot)com>, Nikolay Samokhvalov <nik(at)postgres(dot)ai>, <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Configurable FP_LOCK_SLOTS_PER_BACKEND |
Date: | 2023-09-06 19:09:06 |
Message-ID: | 22d09a40-22ec-4328-921f-e38d9acf0ea7@amazon.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Thread: | |
Lists: | pgsql-hackers |
On 8/8/23 3:04 PM, Andres Freund wrote:
> On 2023-08-08 16:44:37 -0400, Robert Haas wrote:
>> On Mon, Aug 7, 2023 at 6:05 PM Andres Freund <andres(at)anarazel(dot)de> wrote:
>>> I think the biggest flaw of the locking scheme is that the LockHash locks
>>> protect two, somewhat independent, things:
>>> 1) the set of currently lockable objects, i.e. the entries in the hash table [partition]
>>> 2) the state of all the locks [in a partition]
>>>
>>> It'd not be that hard to avoid the shared hashtable lookup in a number of
>>> cases, e.g. by keeping LOCALLOCK entries around for longer, as I suggest
>>> above. But we can't, in general, avoid the lock on the partition anyway, as
>>> the each lock's state is also protected by the partition lock.
>>
>> Yes, and that's a huge problem. The main selling point of the whole
>> fast-path mechanism is to ease the pressure on the lock manager
>> partition locks, and if we did something like what you described in
>> the previous email without changing the locking regimen, we'd bring
>> all of that contention back. I'm pretty sure that would suck.
>
> Yea - I tried to outline how I think we could implement the fastpath locking
> scheme in a less limited way in the earlier email, that I had quoted above
> this bit. Here I was pontificating on what we possibly should do in addition
> to that. I think even if we had "unlimited" fastpath locking, there's still
> enough pressure on the lock manager locks that it's worth improving the
> overall locking scheme.
Has anyone considered whether increasing NUM_LOCK_PARTITIONS to
something bigger than 16 might offer cheap/easy/small short-term
improvements while folks continue to think about the bigger long-term ideas?
I haven't looked deeply into it myself yet. Didn't see a mention in this
thread or in Matt's gitlab research ticket. Maybe it doesn't actually
help. Anyway Alexander Pyhalov's email about LWLock optimization and
NUM_LOCK_PARTITIONS is out there, and I wondered about this.
-Jeremy
--
Jeremy Schneider
Performance Engineer
Amazon Web Services
From | Date | Subject | |
---|---|---|---|
Next Message | Vik Fearing | 2023-09-06 19:09:20 | Re: information_schema and not-null constraints |
Previous Message | Robert Haas | 2023-09-06 18:50:41 | Re: Can a role have indirect ADMIN OPTION on another role? |