From: Konstantin Knizhnik <k(dot)knizhnik(at)postgrespro(dot)ru>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Contention preventing locking
Date: 2018-03-01 07:52:49
Message-ID: 863682b8-3680-06e2-b73a-3edd0c46d9ba@postgrespro.ru
Lists: pgsql-hackers
On 28.02.2018 16:32, Amit Kapila wrote:
> On Mon, Feb 26, 2018 at 8:26 PM, Konstantin Knizhnik
> <k(dot)knizhnik(at)postgrespro(dot)ru> wrote:
>> On 26.02.2018 17:20, Amit Kapila wrote:
>>> Can you please explain, how it can be done easily without extra tuple
>>> locks? I have tried to read your patch but due to lack of comments,
>>> it is not clear what you are trying to achieve. As far as I can see
>>> you are changing the locktag passed to LockAcquireExtended by the
>>> first waiter for the transaction. How will it achieve the serial
>>> waiting protocol (queue the waiters for tuple) for a particular tuple
>>> being updated?
>>>
>> The idea of transaction lock chaining was very simple. I have explained it
>> in the first mail in this thread.
>> Assumed that transaction T1 has updated tuple R1.
>> Transaction T2 also tries to update this tuple and so waits for T1 XID.
>> If then yet another transaction T3 also tries to update R1, then it should
>> wait for T2, not for T1.
>>
> Isn't this exactly what we try to do via tuple locks
> (heap_acquire_tuplock)? Currently, T2 will acquire the tuple lock on R1
> before waiting for T1, and T3 will wait on T2 (not on T1) to release
> the tuple lock on R1; similarly, the other waiters form a queue and
> are woken one-by-one. As soon as T2 is woken, it releases the tuple
> lock and tries to fetch the updated tuple. Releasing the tuple lock
> also allows T3 to proceed, and as T3 was supposed to wait on T1
> (according to the tuple-satisfies API), it is immediately released and
> tries to do the same work as T2. One of them will succeed and the
> other has to re-fetch the updated tuple again.
Yes, but with two caveats:
1. The tuple lock is used inside the heap_* functions, but not in
EvalPlanQualFetch, where the transaction lock is also used.
2. The tuple lock is held until the end of the update, not until commit of
the transaction. So another transaction can receive control before this
transaction completes, and contention still takes place.
Contention is reduced and performance increases only if the locks (either
the tuple lock or the xid lock) are held until the end of the transaction.
Unfortunately that may lead to deadlock.
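To see why holding row locks until end of transaction risks deadlock, here is a toy sketch (illustrative Python only, not PostgreSQL code; the transaction and row names are invented): if T1 updates R1 and then waits for R2 while T2 updates R2 and then waits for R1, the wait-for graph contains a cycle, which is exactly what the deadlock detector would find.

```python
# Toy model: transactions hold per-row locks until commit.
# A deadlock appears as a cycle in the wait-for graph.

def find_cycle(wait_for):
    """Return True if the wait-for graph (xact -> xact it waits on) has a cycle."""
    def visit(node, path):
        if node in path:
            return True
        nxt = wait_for.get(node)
        return visit(nxt, path | {node}) if nxt is not None else False
    return any(visit(n, set()) for n in wait_for)

# T1 updated R1 and is blocked on R2's holder;
# T2 updated R2 and is blocked on R1's holder.
holders = {"R1": "T1", "R2": "T2"}
wait_for = {"T1": holders["R2"], "T2": holders["R1"]}  # T1 -> T2, T2 -> T1

print(find_cycle(wait_for))  # True: the classic two-transaction deadlock
```

With locks released at the end of each update (the current behavior) this cycle cannot form, which is the trade-off being discussed.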
My last attempt to reduce contention was to replace the shared lock with an
exclusive one in XactLockTableWait and remove the unlock from this function,
so that only one transaction can get the xact lock and will hold it until
the end of the transaction. The tuple lock also seems to be unnecessary in
this case. This shows better performance on the pgrw test, but on the YCSB
benchmark with workload A (50% updates) performance was even worse than with
vanilla Postgres. And worst of all, there are deadlocks in the pgbench tests.
> I think in this whole process backends may need to wait multiple times
> either on the tuple lock or the xact lock. It seems the reason for these
> waits is that we immediately release the tuple lock (acquired by
> heap_acquire_tuplock) once the transaction on which we were waiting is
> finished. AFAICU, the reason for releasing the tuple lock immediately
> instead of at the end of the transaction is that we don't want to
> accumulate too many locks, as that can lead to unbounded use of
> shared memory. How about if we release the tuple lock at the end of the
> transaction unless the transaction acquires more than a certain
> threshold (say 10 or 50) of such locks, in which case we fall back
> to the current strategy?
>
Certainly; I have tested such a version. Unfortunately it doesn't help.
The tuple lock uses the tuple's TID, but once a transaction has made the
update, a new version of the tuple is produced with a different TID, and all
new transactions will see this version, so they will not notice this lock at
all. This is why my first attempt to address contention was to replace the
TID lock with a PK (primary key) lock. And it really does reduce contention
and the degradation of performance with an increasing number of connections.
But it is not so easy to correctly extract the PK in all cases.
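The locktag problem can be sketched with a toy lock table (illustrative Python only, not the actual patch; all names are invented): when the lock key is the TID, the update produces a new tuple version with a new TID, so later transactions queue under a different key and never see the original lock. Keying by primary key keeps every waiter in one serial queue, because the PK survives the update.

```python
# Toy lock table keyed either by TID or by primary key.
# After an update, the tuple's TID changes but its PK does not.

waiters = {}  # lock key -> list of waiting transactions

def enqueue(key, xact):
    waiters.setdefault(key, []).append(xact)

# --- Keyed by TID ---
old_tid, pk = (0, 1), 42
enqueue(("tid", old_tid), "T2")   # T2 queues on the version T1 is updating
new_tid = (0, 2)                  # T1's update creates a new tuple version
enqueue(("tid", new_tid), "T3")   # T3 sees the new TID: a *different* queue
print(waiters[("tid", old_tid)], waiters[("tid", new_tid)])  # ['T2'] ['T3']

# --- Keyed by primary key ---
enqueue(("pk", pk), "T2")
enqueue(("pk", pk), "T3")         # same key regardless of tuple version
print(waiters[("pk", pk)])        # ['T2', 'T3'] - a single serial queue
```

The two disjoint TID queues are exactly why new transactions "will not notice this lock at all" in the TID-keyed scheme.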
--
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company