From: | David Rowley <david(dot)rowley(at)2ndquadrant(dot)com> |
---|---|
To: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
Cc: | PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org> |
Subject: | Re: Inadequate executor locking of indexes |
Date: | 2018-11-23 04:41:26 |
Message-ID: | CAKJS1f9XZ225fZicuPbi_0CLRN64B8+bhxK-hEAgZZHUfzmKZQ@mail.gmail.com |
Lists: | pgsql-hackers |
On Thu, 8 Nov 2018 at 13:14, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>
> I discovered that it's possible to trigger relation_open's new assertion
> about having a lock on the relation by the simple expedient of running
> the core regression tests with plan_cache_mode = force_generic_plan.
> There are several things we might do to fix this:
>
> 1. Drop the "operation != CMD_DELETE" condition from the above-quoted bit
> in ExecInitModifyTable. We might be forced into that someday anyway if
> we want to support non-heap-style tables, since most other peoples'
> versions of indexes do want to know about deletions.
>
> 2. Drop the target-table optimization from nodeIndexscan.c and friends,
> and just always open the scan index with AccessShareLock. (BTW, it's
> a bit inconsistent that these nodes don't release their index locks
> at the end; ExecCloseIndices does.)
>
> 3. Teach nodeIndexscan.c and friends about the operation != CMD_DELETE
> exception, so that they get the lock for themselves in that case. This
> seems pretty ugly/fragile, but it's about the only option that won't end
> in adding index lock-acquisition overhead in some cases. (But given the
> planner's behavior, it's not clear to me how often that really matters.)
Since I missed this and only discovered the problem when someone else
reported it today, and since I already did my own analysis separately
in [1], my vote is for #2. For partitioned table updates there may be
many result relations, which can make
ExecRelationIsTargetRelation() very slow; in such a case any
additional redundant lock would be cheap by comparison.
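The slowdown can be seen from the shape of the check itself. The
following is a minimal, self-contained C sketch, not PostgreSQL code:
the EState struct and the plain OID array are invented for
illustration (the real executor walks an array of ResultRelInfo
structs), but it shows why the check degrades linearly with the number
of result relations:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

typedef unsigned int Oid;

/* Toy stand-in for the executor state's list of target relations. */
typedef struct EState
{
    const Oid  *resultRelids;   /* OIDs of the result relations */
    size_t      numResultRels;
} EState;

/*
 * Linear scan over every result relation.  With N partitions as update
 * targets, each index-scan node that asks this question pays O(N), so
 * initializing the plan costs O(N^2) overall.
 */
static bool
ExecRelationIsTargetRelation(const EState *estate, Oid relid)
{
    for (size_t i = 0; i < estate->numResultRels; i++)
    {
        if (estate->resultRelids[i] == relid)
            return true;
    }
    return false;
}
```

Against that quadratic startup cost, one extra AccessShareLock per
index (option #2) is cheap.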
Ideally, the locking code would realise we already hold a stronger
lock and skip acquiring the weaker one, but I don't see how that's
realistically possible without first probing the hash table for every
stronger lock mode, which would likely hurt locking performance.
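The skip-if-stronger idea could be sketched as below. This is a toy
model, not PostgreSQL's lock manager: the mode values and the flat
held-locks table are invented (real locks live in a shared hash table
keyed by lock tag), but it makes the cost visible as one extra probe
per stronger mode on every acquisition:

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified lock modes, numbered weakest to strongest. */
typedef enum
{
    AccessShareLock = 1,
    RowExclusiveLock = 3,
    AccessExclusiveLock = 8,
    MaxLockMode = 8
} LockMode;

/* Toy local lock table: held[relid][mode]. */
#define MAX_RELS 16
static bool held[MAX_RELS][MaxLockMode + 1];

/*
 * Returns true if acquisition was skipped because a lock of this
 * strength or stronger is already held.  Each call probes every
 * stronger mode before taking the lock -- the per-acquisition
 * overhead that makes this approach unattractive.
 */
static bool
LockRelationSkippingWeaker(unsigned relid, LockMode mode)
{
    for (int m = mode; m <= MaxLockMode; m++)
    {
        if (held[relid][m])
            return true;        /* already adequately locked */
    }
    held[relid][mode] = true;   /* otherwise take the lock */
    return false;
}
```

Every caller pays for the probes whether or not a stronger lock
exists, which is why simply taking the redundant weaker lock is likely
cheaper in the common case.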
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services