From: | "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov> |
---|---|
To: | "Robert Haas" <robertmhaas(at)gmail(dot)com> |
Cc: | "Greg Smith" <greg(at)2ndquadrant(dot)com>, "Marko Tiikkaja" <marko(dot)tiikkaja(at)cs(dot)helsinki(dot)fi>, "Boxuan Zhai" <bxzhai2010(at)gmail(dot)com>, "Greg Stark" <gsstark(at)mit(dot)edu>, <pgsql-hackers(at)postgresql(dot)org>, "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "Martijn van Oosterhout" <kleptog(at)svana(dot)org> |
Subject: | Re: ask for review of MERGE |
Date: | 2010-10-25 19:15:14 |
Message-ID: | 4CC590F20200002500036DCF@gw.wicourts.gov |
Lists: pgsql-hackers
Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> rhaas=# create table concurrent (x integer primary key);
> NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index
> "concurrent_pkey" for table "concurrent"
> CREATE TABLE
> rhaas=# insert into concurrent values (1);
> rhaas=# begin;
> BEGIN
> rhaas=# insert into concurrent values (2);
> INSERT 0 1
>
> <switch to a different window>
>
> rhaas=# update concurrent set x=x where x=2;
> UPDATE 0
That surprised me. I would have thought that the INSERT would have
created an "in doubt" tuple which would block the UPDATE. What is
the reason for not doing so?
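
For comparison, here is a minimal sketch of what I understand does and
does not block at READ COMMITTED, reusing the concurrent table from
above (session labels and output abbreviated): the UPDATE returns
immediately because the uncommitted row is not visible to its snapshot,
while only a conflicting entry in the unique index makes the second
session wait.

<session 1>
test=# begin;
BEGIN
test=# insert into concurrent values (2);
INSERT 0 1

<session 2>
test=# update concurrent set x = x where x = 2;  -- uncommitted row not visible
UPDATE 0
test=# insert into concurrent values (2);        -- duplicate key waits on session 1
<blocks>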
FWIW, I did a quick test: REPEATABLE READ also lets this pass, but
with the SSI patch, SERIALIZABLE seems to cover this correctly,
generating a serialization failure where such access is involved in
write skew:
test=# create table concurrent (x integer primary key);
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index
"concurrent_pkey" for table "concurrent"
CREATE TABLE
test=# insert into concurrent select generate_series(1, 20000);
INSERT 0 20000
test=# begin isolation level serializable;
BEGIN
test=# insert into concurrent values (0);
INSERT 0 1
test=# update concurrent set x = 30001 where x = 30000;
UPDATE 0
<different session>
test=# begin isolation level serializable;
BEGIN
test=# insert into concurrent values (30000);
INSERT 0 1
test=# update concurrent set x = -1 where x = 0;
UPDATE 0
test=# commit;
ERROR: could not serialize access due to read/write dependencies
among transactions
HINT: The transaction might succeed if retried.
I'll need to add a test to cover this, because it might have broken
with one of the optimizations on my list had you not pointed out this
behavior.
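
Roughly, the test I have in mind would exercise the permutation shown
above; here is only a sketch in an isolation-spec style, with the
session and step names invented and the format approximate:

# write skew via insert-then-update on a unique key; sketch only
setup
{
  CREATE TABLE concurrent (x integer PRIMARY KEY);
  INSERT INTO concurrent SELECT generate_series(1, 20000);
}

teardown
{
  DROP TABLE concurrent;
}

session "s1"
setup      { BEGIN ISOLATION LEVEL SERIALIZABLE; }
step "s1i" { INSERT INTO concurrent VALUES (0); }
step "s1u" { UPDATE concurrent SET x = 30001 WHERE x = 30000; }
step "s1c" { COMMIT; }

session "s2"
setup      { BEGIN ISOLATION LEVEL SERIALIZABLE; }
step "s2i" { INSERT INTO concurrent VALUES (30000); }
step "s2u" { UPDATE concurrent SET x = -1 WHERE x = 0; }
step "s2c" { COMMIT; }

# the second COMMIT should fail with a serialization error
permutation "s1i" "s1u" "s2i" "s2u" "s1c" "s2c"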
On the other hand:
<session 1>
test=# drop table concurrent ;
DROP TABLE
test=# create table concurrent (x integer primary key);
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index
"concurrent_pkey" for table "concurrent"
CREATE TABLE
test=# insert into concurrent select generate_series(1, 20000);
INSERT 0 20000
test=# begin isolation level serializable;
BEGIN
test=# insert into concurrent values (0);
INSERT 0 1
<session 2>
test=# begin isolation level serializable;
BEGIN
test=# select * from concurrent where x = 0;
x
---
(0 rows)
test=# insert into concurrent values (0);
<blocks>
<session 1>
test=# commit;
COMMIT
<session 2>
ERROR: duplicate key value violates unique constraint
"concurrent_pkey"
DETAIL: Key (x)=(0) already exists.
Anyway, I thought this might be of interest in terms of the MERGE
patch concurrency issues, since the SSI patch has been mentioned.
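
For reference, the sort of statement whose concurrent behavior is at
issue is an upsert-style MERGE, roughly like the following (syntax per
the standard; I haven't checked it against the current patch):

MERGE INTO concurrent t
USING (VALUES (0)) s(x)
ON t.x = s.x
WHEN MATCHED THEN
    UPDATE SET x = s.x
WHEN NOT MATCHED THEN
    INSERT VALUES (s.x);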
-Kevin