From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Konstantin Knizhnik <k(dot)knizhnik(at)postgrespro(dot)ru>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Contention preventing locking
Date: 2018-02-28 13:32:47
Message-ID: CAA4eK1JSpP+puQVM3jgpK8rwtxtJBj3b6A44LVZqkFxr_cO2gA@mail.gmail.com
Lists: pgsql-hackers
On Mon, Feb 26, 2018 at 8:26 PM, Konstantin Knizhnik
<k(dot)knizhnik(at)postgrespro(dot)ru> wrote:
>
> On 26.02.2018 17:20, Amit Kapila wrote:
>>
>> Can you please explain how it can be done easily without extra tuple
>> locks? I have tried to read your patch but due to lack of comments,
>> it is not clear what you are trying to achieve. As far as I can see
>> you are changing the locktag passed to LockAcquireExtended by the
>> first waiter for the transaction. How will it achieve the serial
>> waiting protocol (queue the waiters for tuple) for a particular tuple
>> being updated?
>>
> The idea of transaction lock chaining was very simple; I explained it
> in the first mail in this thread.
> Assume that transaction T1 has updated tuple R1.
> Transaction T2 also tries to update this tuple and so waits for T1's XID.
> If then yet another transaction T3 also tries to update R1, then it
> should wait for T2, not for T1.
>
Isn't this exactly what we try to do via tuple locks
(heap_acquire_tuplock)? Currently, T2, before waiting for T1, will
acquire the tuple lock on R1, and T3 will wait on T2 (not on T1) to
release the tuple lock on R1; similarly, the other waiters form a
queue and are woken one by one. As soon as T2 is woken up, it
releases the lock on the tuple and tries to fetch the updated tuple.
Now, releasing the tuple lock by T2 allows T3 to proceed as well,
and as T3 was supposed to wait on T1 (according to the
tuple-satisfies API), it will be released immediately and will try
to do the same work as T2. One of them will succeed and the other
will have to re-fetch the updated tuple again.
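
Roughly, the flow looks like this (a heavily simplified sketch of
heap_update()/heap_lock_tuple() in src/backend/access/heap/heapam.c;
variable names follow heapam.c conventions, and the real code also
handles multixacts, wait policies, and buffer-lock juggling):

    /* Simplified sketch of the current wait protocol */
    bool        have_tuple_lock = false;

    /* T2 and T3 queue here; waiters are woken one by one */
    heap_acquire_tuplock(relation, &tuple->t_self, mode,
                         LockWaitBlock, &have_tuple_lock);

    /* sleep until the updater (T1) commits or aborts */
    XactLockTableWait(xwait, relation, &tuple->t_self, XLTW_Update);

    /*
     * The tuple lock is released here, long before our own
     * transaction ends, which is what wakes T3 even though we (T2)
     * may beat it to the updated tuple and force it to re-fetch.
     */
    if (have_tuple_lock)
        UnlockTupleTuplock(relation, &tuple->t_self, mode);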
I think in this whole process backends may need to wait multiple
times, either on the tuple lock or on the xact lock. It seems the
reason for these waits is that we immediately release the tuple lock
(acquired by heap_acquire_tuplock) once the transaction we were
waiting on is finished. AFAICU, the reason for releasing the tuple
lock immediately instead of at the end of the transaction is that we
don't want to accumulate too many locks, as that can lead to
unbounded use of shared memory. How about if we release the tuple
lock at the end of the transaction, unless the transaction acquires
more than a certain threshold (say 10 or 50) of such locks, in which
case we fall back to the current strategy?
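
In pseudo-C, the idea would be something like the following (entirely
hypothetical -- the counter, the threshold name, and the placement are
made up for illustration, none of them exist in PostgreSQL today):

    #define TUPLE_LOCK_HOLD_THRESHOLD   50      /* assumed tunable */

    static int  nheld_tuple_locks = 0;          /* per-backend counter */

    if (have_tuple_lock)
    {
        if (nheld_tuple_locks < TUPLE_LOCK_HOLD_THRESHOLD)
        {
            /*
             * Keep the tuple lock; LockReleaseAll() at commit/abort
             * frees it, so queued waiters stay queued instead of being
             * woken only to re-fetch and re-wait.
             */
            nheld_tuple_locks++;
        }
        else
        {
            /* too many held tuple locks: release now, as we do today */
            UnlockTupleTuplock(relation, &tuple->t_self, mode);
        }
    }

The counter would need to be reset at transaction end, and waiters
queued behind a held lock would then be woken strictly at
commit/abort, one at a time.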
--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com