From: Heikki Linnakangas <hlinnaka(at)iki(dot)fi>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, Peter Geoghegan <pg(at)heroku(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: LWLock deadlock and gdb advice
Date: 2015-07-29 12:14:23
Message-ID: 55B8C39F.3070801@iki.fi
Lists: pgsql-hackers
On 07/29/2015 03:08 PM, Andres Freund wrote:
> On 2015-07-29 14:55:54 +0300, Heikki Linnakangas wrote:
>> On 07/29/2015 02:39 PM, Andres Freund wrote:
>>> In an earlier email you say:
>>>> After the spinlock is released above, but before the LWLockQueueSelf() call,
>>>> it's possible that another backend comes in, acquires the lock, changes the
>>>> variable's value, and releases the lock again. In 9.4, the spinlock was not
>>>> released until the process was queued.
>>>
>>> But that's not a problem. The updater in that case will have queued a
>>> wakeup for all waiters, including WaitForVar()?
>>
>> If you release the spinlock before LWLockQueueSelf(), then no. It's possible
>> that the backend you wanted to wait for updates the variable's value before
>> you've queued up. Or even releases the lock, and it gets re-acquired by
>> another backend, before you've queued up.
>
> For normal locks the equivalent problem is solved by re-checking whether
> a conflicting lock is still held after enqueuing. Why don't we do the
> same here? Doing it that way has the big advantage that we can just
> remove the spinlocks entirely on platforms with atomic 64-bit
> reads/writes.
Ah, ok, that should work, as long as you also re-check the variable's
value after queueing. Want to write the patch, or should I?
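
To make the pattern concrete, here's a minimal C sketch of the
enqueue-then-recheck idea as I understand it. This is not the actual
lwlock.c code: DemoLock, demo_queue_self(), demo_dequeue_self(), and
demo_sleep() are hypothetical stand-ins for the real wait-list
primitives (left as declarations), and it assumes a platform with
atomic 64-bit reads, per your point about dropping the spinlock:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-ins for the real wait-list machinery. */
typedef struct
{
    _Atomic uint32_t state;    /* lock-held flag, waiter bits, ... */
    _Atomic uint64_t var;      /* the protected 64-bit variable */
} DemoLock;

extern void demo_queue_self(DemoLock *lock);    /* add self to wait list */
extern void demo_dequeue_self(DemoLock *lock);  /* remove self again */
extern void demo_sleep(void);                   /* block until woken */

#define DEMO_LOCK_HELD  ((uint32_t) 1 << 31)

/*
 * Wait until the variable's value differs from oldval, or the lock is
 * released.  The missed-wakeup race is closed by re-checking the
 * condition *after* enqueuing, mirroring the normal lock-acquire path.
 */
static bool
demo_wait_for_var(DemoLock *lock, uint64_t oldval, uint64_t *newval)
{
    for (;;)
    {
        uint64_t value = atomic_load(&lock->var);

        if (value != oldval)
        {
            *newval = value;
            return true;
        }
        if (!(atomic_load(&lock->state) & DEMO_LOCK_HELD))
            return false;      /* lock released; nothing to wait for */

        /* Queue first, then re-check the condition. */
        demo_queue_self(lock);

        if (atomic_load(&lock->var) != oldval ||
            !(atomic_load(&lock->state) & DEMO_LOCK_HELD))
        {
            /* Condition changed between the check and the enqueue. */
            demo_dequeue_self(lock);
            continue;          /* loop back and report the new state */
        }

        demo_sleep();          /* updater wakes all queued waiters */
    }
}

If the updater always sets the variable before waking waiters, any
change that happens before we queue is caught by the re-check, and any
change after we queue triggers a wakeup, so no update can be missed.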
- Heikki