Concurrency-safe Replacing a Set of Rows (Without SERIALIZABLE)

From: Gavin Wahl <gwahl(at)fusionbox(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Concurrency-safe Replacing a Set of Rows (Without SERIALIZABLE)
Date: 2017-04-12 22:32:57
Message-ID: CAAAf5g-amPWT9MpTJCUZ61rnGNUmt8OQNPx2cT9=Xas00KsTkg@mail.gmail.com
Lists: pgsql-general

I have a table that stores user notifications:

CREATE TABLE notifications (
    user_id INT,
    type CHAR(1),
    PRIMARY KEY (user_id, type)
);

When a user edits their notifications, I need to atomically replace the old set
with the new set. My first instinct is to do this:

BEGIN;
DELETE FROM notifications WHERE user_id = 1;
INSERT INTO notifications (user_id, type) VALUES (1, 'a'), (1, 'b');
COMMIT;

This doesn't work when two transactions run concurrently, though:
one of them will get a unique constraint violation.
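To make the failure concrete, here is one possible interleaving of two
sessions (a sketch, not from the original message; the session labels
are annotations, not SQL):

```sql
-- Session A                          -- Session B
BEGIN;
DELETE FROM notifications
  WHERE user_id = 1;
INSERT INTO notifications
  VALUES (1, 'a'), (1, 'b');
                                      BEGIN;
                                      DELETE FROM notifications
                                        WHERE user_id = 1;
                                      -- blocks on A's uncommitted deletes
COMMIT;
                                      -- B's DELETE resumes and finds
                                      -- nothing left to remove
                                      INSERT INTO notifications
                                        VALUES (1, 'b'), (1, 'c');
                                      -- ERROR: duplicate key value violates
                                      -- unique constraint "notifications_pkey"
```

B's INSERT fails because A's committed rows now occupy the keys B is
trying to insert.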

My next thought was to use upsert:

BEGIN;
DELETE FROM notifications WHERE user_id = 1;
INSERT INTO notifications (user_id, type) VALUES (1, 'a'), (1, 'b')
  ON CONFLICT DO NOTHING;
COMMIT;

This avoids the error under concurrency, but it still doesn't do the
right thing. Consider one transaction replacing the set with {'a', 'b'}
while another concurrently replaces it with {'b', 'c'}. The result should
be either {'a', 'b'} or {'b', 'c'}, but the two sets actually get merged
and the user ends up with notifications {'a', 'b', 'c'}.
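The merge can be traced step by step. The following is a minimal sketch
in plain Python (no database) of the READ COMMITTED interleaving just
described; the tuple ids model physical row versions, and all names here
are illustrative, not Postgres internals:

```python
# Model rows as (row_version_id, user_id, type) tuples so that a row
# deleted and re-inserted is a *new* version, as in Postgres MVCC.

def insert_on_conflict_do_nothing(rows, new_rows):
    """Insert rows, skipping any whose (user_id, type) key already exists."""
    keys = {(u, t) for (_rid, u, t) in rows}
    for rid, u, t in sorted(new_rows):
        if (u, t) not in keys:
            rows.add((rid, u, t))
            keys.add((u, t))

# Initial state: the user already has notification 'a'.
rows = {("v1", 1, "a")}

# T1: DELETE FROM notifications WHERE user_id = 1;
#     removes the row versions visible in its snapshot.
t1_snapshot = set(rows)
rows -= t1_snapshot

# T1: INSERT (1,'a'), (1,'b') ON CONFLICT DO NOTHING; then COMMIT.
insert_on_conflict_do_nothing(rows, {("v2", 1, "a"), ("v3", 1, "b")})

# T2: its DELETE waited on T1, then re-checks: every row version in its
# snapshot is already gone, and T1's new versions were not in that
# snapshot, so the DELETE removes nothing.
rows -= {r for r in t1_snapshot if r in rows}  # empty intersection

# T2: INSERT (1,'b'), (1,'c') ON CONFLICT DO NOTHING;
#     'b' conflicts with T1's committed row and is silently skipped,
#     so only 'c' is added.
insert_on_conflict_do_nothing(rows, {("v4", 1, "b"), ("v5", 1, "c")})

print(sorted(t for (_rid, _u, t) in rows))  # ['a', 'b', 'c'] -- merged sets
```

Neither transaction errors, yet the final state is a set neither of them
wrote.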

Is there any way to do this correctly without SERIALIZABLE transactions? It
would be nice to avoid having to retry transactions. Ideally I'd like to avoid
explicit locking as well.
