Re: BUG #15026: Deadlock using GIST index

From: Mark Scheffer <pg2401k(at)pinkwin(dot)com>
To: pgsql-bugs(at)postgresql(dot)org
Subject: Re: BUG #15026: Deadlock using GIST index
Date: 2018-02-12 07:38:33
Message-ID: 1518421113328-0.post@n3.nabble.com
Lists: pgsql-bugs

Peter Geoghegan wrote:
> While I think that your complaint is a legitimate one, I'd say that
> it's very unlikely to be fixed. Ordinary (non-deferrable) unique
> constraints don't have this problem because they pessimistically lock
> the first leaf page the value could be on ahead of inserting. In
> contrast, exclusion constraints generally optimistically detect any
> conflict in a separate pass. That's how you can get concurrent
> insertions to wait on each other, rather than having a clear "winner"
> at the precise point that insertion occurs.
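
For comparison, I convinced myself that a plain (non-deferrable) unique
constraint indeed does not have this problem; a minimal sketch (the
locked_unique table is mine, not part of the original report):

CREATE TABLE locked_unique (key text PRIMARY KEY);

-- Session 1:
BEGIN;
INSERT INTO locked_unique(key) VALUES ('a');   -- the clear "winner"

-- Session 2:
INSERT INTO locked_unique(key) VALUES ('a');
-- Blocks on session 1, then raises unique_violation if session 1 commits,
-- or succeeds if it rolls back. The second inserter only ever waits on the
-- first, never the other way around, so no deadlock is possible here.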

I was already afraid of this after reading your post on pgsql-hackers (Jan 24th
2015, "Moving ExecInsertIndexTuples and friends to new file"), where this
comment was added to the source code:

> FWIW, both Jeff Davis and Tom Lane were well aware of this issue back
> when exclusion constraints went in - it was judged to be acceptable at
> the time, which I agree with. I happened to discuss this with Jeff in
> New York recently. I agree that it should definitely be documented
> like this (and the fact that ordinary unique indexes are unaffected,
> too).

But how do I work around this deadlock without requiring a more general lock,
such as a table lock or a lock on some other related object? I need guaranteed
transactional processing. In case of (this) deadlock, I attempted to catch the
deadlock and retry. A new issue popped up: if more than two sessions compete
for the lock, I get an infinite loop of deadlocks (cf. the "Looking for
workaround to avoid deadlock when using exclusion constraint" post of Feb 4
2018).

The following code snippet appears to work if the pg_sleep is longer than the
deadlock_timeout value, but I have two problems with it:
- I need a deadlock_timeout as small as possible (10ms might work, not sure).
  This will result in many long-wait messages in the log (which I want to
  monitor...) plus higher CPU load.
- I'm not 100% sure this works in all cases.

DO $$
DECLARE
    key_  locked.key%TYPE;
    ctid_ locked.ctid%TYPE;
BEGIN
    WHILE key_ IS NULL
    LOOP
        BEGIN
            -- Try to read an existing row first.
            SELECT key, ctid FROM locked WHERE key = 'a'
            INTO STRICT key_, ctid_;
        EXCEPTION
            WHEN no_data_found THEN
                BEGIN
                    -- No visible row: try to insert it ourselves.
                    INSERT INTO locked(key) VALUES ('a')
                    RETURNING key, ctid INTO key_, ctid_;
                EXCEPTION
                    WHEN exclusion_violation THEN
                        NULL;  -- another session won; retry the SELECT
                    WHEN deadlock_detected THEN
                        RAISE WARNING 'Deadlock!! (t=%)', clock_timestamp();
                        PERFORM pg_sleep(1.1); -- deadlock_timeout = 1s
                END;
        END;
    END LOOP;
    RAISE WARNING 'key=%,ctid=%', key_, ctid_;
END
$$;

Execute the DO block above from at least three sessions, after inserting a row
from a fourth session; roll back that insert once all sessions have started:

DROP TABLE IF EXISTS locked;
CREATE EXTENSION IF NOT EXISTS btree_gist; -- provides the gist "=" opclass for text
CREATE TABLE locked
(
    key text NOT NULL,
    EXCLUDE USING gist (key WITH =)
);
BEGIN;
INSERT INTO locked(key) VALUES ('a');
-- Start 3+ DO blocks in other sessions
ROLLBACK;
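
For what it's worth, the only deadlock-free variant I have come up with so far
is to serialize writers of the same key with a transaction-level advisory lock
taken before touching the table. A rough sketch (hashtext() is an undocumented
built-in hash function, the two-int lock key is my own convention, and this
only helps if every session that writes key 'a' takes the same lock first):

DO $$
DECLARE
    key_  locked.key%TYPE;
    ctid_ locked.ctid%TYPE;
BEGIN
    -- Serialize all writers of ('locked', 'a'); released at commit/rollback.
    PERFORM pg_advisory_xact_lock(hashtext('locked'), hashtext('a'));

    SELECT key, ctid FROM locked WHERE key = 'a' INTO key_, ctid_;
    IF NOT FOUND THEN
        INSERT INTO locked(key) VALUES ('a') RETURNING key, ctid INTO key_, ctid_;
    END IF;

    RAISE WARNING 'key=%,ctid=%', key_, ctid_;
END
$$;

This is narrower than a table lock -- only sessions interested in the same key
queue behind each other -- but it means the exclusion constraint is never raced
by two concurrent inserters, so the deadlock cannot arise between them.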

