From: Andres Freund <andres(at)anarazel(dot)de>
To: Heikki Linnakangas <hlinnaka(at)iki(dot)fi>
Cc: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, Peter Geoghegan <pg(at)heroku(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: LWLock deadlock and gdb advice
Date: 2015-07-29 12:08:14
Message-ID: 20150729120814.GE10043@alap3.anarazel.de
Lists: pgsql-hackers
On 2015-07-29 14:55:54 +0300, Heikki Linnakangas wrote:
> On 07/29/2015 02:39 PM, Andres Freund wrote:
> >In an earlier email you say:
> >>After the spinlock is released above, but before the LWLockQueueSelf() call,
> >>it's possible that another backend comes in, acquires the lock, changes the
> >>variable's value, and releases the lock again. In 9.4, the spinlock was not
> >>released until the process was queued.
> >
> >But that's not a problem. The updater in that case will have queued a
> >wakeup for all waiters, including WaitForVar()?
>
> If you release the spinlock before LWLockQueueSelf(), then no. It's possible
> that the backend you wanted to wait for updates the variable's value before
> you've queued up. Or even releases the lock, and it gets re-acquired by
> another backend, before you've queued up.
For normal locks the equivalent problem is solved by re-checking whether
a conflicting lock is still held after enqueuing. Why don't we do the
same here? Doing it that way has the big advantage that we can just
remove the spinlocks entirely on platforms with atomic 64-bit
reads/writes.
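
Something like this, roughly (a standalone sketch using C11 atomics, with
a mutex/condition variable standing in for our wait queue and wakeup
machinery; all names here are made up for illustration, this is not the
actual lwlock.c code):

#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>

typedef struct
{
	_Atomic uint64_t value;		/* the protected variable; plain atomic
								 * 64-bit reads/writes, no spinlock */
	pthread_mutex_t wait_lock;	/* protects nwaiters (the "queue") */
	pthread_cond_t wakeup;
	int			nwaiters;
} VarLock;

/*
 * Block until the variable differs from 'oldval'; return the new value.
 */
static uint64_t
wait_for_var(VarLock *lk, uint64_t oldval)
{
	for (;;)
	{
		uint64_t	cur = atomic_load(&lk->value);

		if (cur != oldval)
			return cur;

		/* enqueue ourselves first ... */
		pthread_mutex_lock(&lk->wait_lock);
		lk->nwaiters++;

		/*
		 * ... then re-check.  If the updater changed the value between our
		 * unlocked check above and the enqueue, its wakeup may already have
		 * happened, so we must dequeue and return instead of sleeping.
		 */
		cur = atomic_load(&lk->value);
		if (cur != oldval)
		{
			lk->nwaiters--;
			pthread_mutex_unlock(&lk->wait_lock);
			return cur;
		}

		pthread_cond_wait(&lk->wakeup, &lk->wait_lock);
		lk->nwaiters--;
		pthread_mutex_unlock(&lk->wait_lock);
		/* loop and re-check; pthread_cond_wait can wake spuriously */
	}
}

/*
 * Set the variable to 'newval' and wake everybody waiting on it.
 */
static void
update_var(VarLock *lk, uint64_t newval)
{
	atomic_store(&lk->value, newval);

	pthread_mutex_lock(&lk->wait_lock);
	if (lk->nwaiters > 0)
		pthread_cond_broadcast(&lk->wakeup);
	pthread_mutex_unlock(&lk->wait_lock);
}

The point being that the re-check happens after we're already on the
queue, so an update that slips in between the first check and the
enqueue can't cost us the wakeup; and since the variable itself is a
single atomic 64-bit word, neither side needs a spinlock to read or
write it.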
Greetings,
Andres Freund