From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>, Rod Taylor <rod(dot)taylor(at)gmail(dot)com>
Subject: Re: LOCK TABLE .. DEFERRABLE
Date: 2016-12-02 04:51:30
Message-ID: CANP8+j+9iwg1AZqLfvs-CLzokM3QAmHty2MXNxUcbR0zPcPmmQ@mail.gmail.com
Lists: pgsql-hackers
On 15 September 2016 at 18:51, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> On Tue, Sep 6, 2016 at 6:04 AM, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
>> On 1 September 2016 at 21:28, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
>>> So the only way to handle multiple locks is to do this roughly the way
>>> Rod suggests.
>>
>> I've just been reading the VACUUM code and it turns out that we
>> already use Rod's mechanism internally. So on that basis it seems fine
>> to support this as a useful user-level feature. If there is a better
>> way of doing it, then that can be added later.
>>
>> My proposed changes to this patch are these
>>
>> 1. Rename this WAIT PATIENTLY, which is IMHO a better description of
>> what is being requested. Bikeshedding welcome.
>>
>> 2. Introduce a new API call LockTablePatiently() that returns bool, so
>> its usage is similar to ConditionalLockTable(); the only difference is
>> that you supply some other wait parameters with it. This isolates the
>> internal mechanism from the usage, so it allows us to more easily
>> support any fancy new way of doing this that we think of later.
>>
>> 3. Use LockTablePatiently() within lockcmds.c where appropriate
>>
>> 4. Replace the code for waiting in VACUUM with the new call to
>> LockTablePatiently()
>>
>> So I see this as 2 patches: 1) new API and make VACUUM use new API, 2)
>> Rod's LOCK TABLE patch
>>
>> First patch attached, requires also lock by Oid. If we agree, Rod,
>> please update your patch to match?
>
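To make point 2 above concrete: the signature I have in mind is only a
sketch at this stage, and the wait parameters below are placeholders
rather than settled names.

    /* Sketch only: returns true if the lock was taken, false if we gave up. */
    extern bool LockTablePatiently(Oid relid, LOCKMODE lockmode,
                                   int max_retries, long retry_interval_ms);
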
> Aside from the fact that polling is generally inefficient and wasteful
> of system resources, this allows for undetected deadlocks. Consider:
>
> S1: LOCK TABLE A;
> S2: LOCK TABLE B;
> S1: LOCK TABLE B; -- blocks
> S2: LOCK TABLE A PATIENTLY; -- retries forever
Hmmm
> Livelock might be possible, too.
>
> I think it would be better to think harder about what would be
> required to implement this natively in the lock manager. Suppose we
> add a flag to each PROCLOCK which, if true, indicates that the lock
> request is low priority. Also, we add a counter to each LOCK
> indicating the number of pending low-priority lock requests. When
> LockAcquireExtended reaches this code here...
>
> if (lockMethodTable->conflictTab[lockmode] & lock->waitMask)
>     status = STATUS_FOUND;
> else
>     status = LockCheckConflicts(lockMethodTable, lockmode,
>                                 lock, proclock);
>
> ...we add an additional check to the upper branch: if the number of
> low-priority waiters is not 0, then we walk the wait queue; if all
> waiters that conflict with the requested lock mode are low-priority,
> then we set status = STATUS_OK. So, new lock requests refuse to queue
> behind low-priority lock waiters.
Well, that's pretty much the exact design I mentioned earlier.
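For the archive, that upper branch would end up looking roughly like the
sketch below. To be clear, nLowPrioWaiters and
OnlyLowPrioWaitersConflict() are names I'm making up for the proposed
per-LOCK counter and the wait-queue walk; nothing with those names
exists yet.

    if (lockMethodTable->conflictTab[lockmode] & lock->waitMask)
    {
        /*
         * Hypothetical: nLowPrioWaiters is the proposed per-LOCK counter,
         * and the helper walks the wait queue checking the proposed
         * per-PROCLOCK low-priority flag against the requested mode.
         */
        if (lock->nLowPrioWaiters > 0 &&
            OnlyLowPrioWaitersConflict(lock, lockmode))
            status = STATUS_OK;     /* jump ahead of the patient waiters */
        else
            status = STATUS_FOUND;
    }
    else
        status = LockCheckConflicts(lockMethodTable, lockmode,
                                    lock, proclock);
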
> Is that good enough to implement the requested behavior, or do we need
> to do more?
The only problem is that Rod's request was to be able to lock multiple
tables in one statement, which that approach cannot do.
But there are problems with Rod's approach, so I suggest we ditch it
now and I'll implement the single-table lock approach that you,
Andres, and I preferred.
I'd rather have something sweet with one table than something crappy
with multiple tables.
--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services