From: Michael Brown <mbrown(at)fensystems(dot)co(dot)uk>
To: pgsql-general(at)postgresql(dot)org
Subject: PL/pgSQL infinite loop in "UPDATE/INSERT" example
Date: 2009-08-25 17:12:05
Message-ID: 200908251812.06782.mbrown@fensystems.co.uk
Lists: pgsql-general
Hi,
I believe that Example 38-2 ("Exceptions with UPDATE/INSERT") in the PL/pgSQL
documentation suffers from a race condition leading to a potential infinite
loop when running at isolation level SERIALIZABLE.
Here's the relevant code, for reference (copied straight from the 8.4
documentation):
CREATE FUNCTION merge_db(key INT, data TEXT) RETURNS VOID AS $$
BEGIN
    LOOP
        -- first try to update the key
        UPDATE db SET b = data WHERE a = key;
        IF found THEN
            RETURN;
        END IF;
        -- not there, so try to insert the key
        -- if someone else inserts the same key concurrently,
        -- we could get a unique-key failure
        BEGIN
            INSERT INTO db(a,b) VALUES (key, data);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- do nothing, and loop to try the UPDATE again
        END;
    END LOOP;
END; $$ LANGUAGE plpgsql;
The problem scenario is:
T1: BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
T1: SELECT merge_db(1,'foo');
T1: (in merge_db) UPDATE db SET b='foo' WHERE a=1;
T1: (in merge_db) --- not found, so proceed to the INSERT block
T2: BEGIN; SELECT merge_db(1,'bar'); COMMIT; -- succeeds
T1: (in merge_db) INSERT INTO db(a,b) VALUES(1,'foo');
T1: (in merge_db) --- unique_violation exception occurs
T1: (in merge_db) --- loops
T1: (in merge_db) UPDATE db SET b='foo' WHERE a=1;
T1: (in merge_db) --- not found, so proceed to the INSERT block
T1: (in merge_db) INSERT INTO db(a,b) VALUES(1,'foo');
T1: (in merge_db) --- unique_violation exception occurs
T1: (in merge_db) --- loops indefinitely
The underlying cause of the problem is that the row added by T2 is not visible
to T1's UPDATE, but is visible to T1's INSERT via the uniqueness constraint.
When running at isolation level READ COMMITTED, this problem does not occur
since the row added by T2 will be visible when T1 retries its UPDATE.
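The mismatch can be sketched directly from T1's point of view; this is an illustrative transcript only, assuming the same table "db" (with a unique constraint on column a) and the scenario above, after T2 has committed:

```sql
-- Inside T1's SERIALIZABLE transaction, whose snapshot predates T2's commit:
SELECT * FROM db WHERE a = 1;
-- (0 rows): T2's committed row is not visible to T1's snapshot,
-- so the UPDATE in merge_db matches nothing

INSERT INTO db(a, b) VALUES (1, 'foo');
-- ERROR: duplicate key value violates unique constraint
-- The unique index enforces uniqueness against T2's committed row
-- even though that row is invisible to T1's snapshot, so the loop
-- can never make progress.
```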
I'm not sure what the best solution would be. Arguably, merge_db() should
throw a serialization_failure exception as soon as it encounters the
unique_violation, but only if it is running at isolation level SERIALIZABLE.
(This seems too manual, though.) Maybe the UPDATE should be throwing a
serialization_failure when it finds rows matching the WHERE clause that
cannot be locked for update (even though the rows would not be visible in a
SELECT)? (This seems as though it could have other undesirable consequences,
though.)
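For what it's worth, the first suggestion could look something like the sketch below. This is only an illustration, not a tested fix; it assumes PostgreSQL 8.4's RAISE ... USING syntax and uses current_setting('transaction_isolation') to detect SERIALIZABLE mode:

```sql
CREATE FUNCTION merge_db(key INT, data TEXT) RETURNS VOID AS $$
BEGIN
    LOOP
        -- first try to update the key
        UPDATE db SET b = data WHERE a = key;
        IF found THEN
            RETURN;
        END IF;
        BEGIN
            INSERT INTO db(a,b) VALUES (key, data);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            IF current_setting('transaction_isolation') = 'serializable' THEN
                -- The conflicting row can never become visible to our
                -- snapshot, so retrying the UPDATE is futile; report a
                -- serialization failure so the client retries the whole
                -- transaction instead.
                RAISE EXCEPTION 'could not merge due to concurrent insert'
                    USING ERRCODE = 'serialization_failure';
            END IF;
            -- READ COMMITTED: loop; the new row will be visible
            -- to the retried UPDATE
        END;
    END LOOP;
END; $$ LANGUAGE plpgsql;
```

As noted above, this feels too manual: every function written in this style would have to carry the same isolation-level check.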
Any suggestions? Is there a clean way to fix this, or a better approach to
implementing "UPDATE/INSERT", or should the documentation be updated to
say "this only works when using READ COMMITTED"?
Michael