From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
---|---|
To: | Brodie Thiesfield <brofield+pgsql(at)gmail(dot)com> |
Cc: | pgsql-general(at)postgresql(dot)org |
Subject: | Re: How to prevent duplicate key error when two processes do DELETE/INSERT simultaneously? |
Date: | 2009-07-29 15:23:01 |
Message-ID: | 18823.1248880981@sss.pgh.pa.us |
Lists: | pgsql-general |
Brodie Thiesfield <brofield+pgsql(at)gmail(dot)com> writes:
> Essentially, I have two processes connecting to a single PG database
> and simultaneously issuing the following statements:
> BEGIN;
> SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
> DELETE FROM licence_properties WHERE key = xxx;
> INSERT INTO licence_properties ... values with key = xxx;
> COMMIT
You mean they both want to insert the same key?
> One of these processes is getting to the INSERT and failing with
> duplicate key error.
> ERROR: duplicate key value violates unique constraint
If they both insert the same key, this is what *must* happen. Surely
you don't expect both to succeed, or one to fail and not tell you.
> The DELETE should prevent this duplicate key error from occurring. I
> thought that the ISOLATION LEVEL SERIALIZABLE would fix this problem
> (being that the second process can see the INSERT from the first
> process after it has done the DELETE), but it doesn't.
I think you've got the effects of SERIALIZABLE backward, but in any
case SERIALIZABLE does not affect uniqueness checks. Unique is unique.
regards, tom lane
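A common workaround for this race, not spelled out in the thread, is to make the competing writers queue behind an explicit self-conflicting lock before the DELETE/INSERT pair, so the second transaction cannot run its INSERT until the first has committed. A minimal sketch, assuming a (key, value) column layout that the original post elides and using xxx as a stand-in for the application's key:
-- Take a self-conflicting table lock so only one DELETE/INSERT pair runs at a time.
-- SHARE ROW EXCLUSIVE conflicts with itself but does not block plain readers.
-- Column names (key, value) and the literal values are assumptions for illustration.
BEGIN;
LOCK TABLE licence_properties IN SHARE ROW EXCLUSIVE MODE;
DELETE FROM licence_properties WHERE key = 'xxx';
INSERT INTO licence_properties (key, value) VALUES ('xxx', '...');
COMMIT;
The table lock serializes every writer on licence_properties; an advisory lock keyed on the row's key value would serialize only the writers of that key, at the cost of a little more bookkeeping.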