From: "Jarmo Paavilainen" <netletter(at)comder(dot)com>
To: "PostgreSQL Hackers" <pgsql-hackers(at)postgresql(dot)org>
Subject: UNIQUE INDEX unaware of transactions (a spin of question)
Date: 2001-06-16 07:56:39
Message-ID: 001501c0f639$dfc0d7e0$1501a8c0@telia.com
Lists: pgsql-hackers
Hi,
A somewhat theoretical question (sorry for the spelling, and this may be OT).
...
> > It seems that our current way of enforcing uniqueness knows nothing
> > about transactions ;(
...
> > create table t(i int4 primary key);
...
> > begin;
> > delete from t where i=1;
> > insert into t(i) values(1);
> > end;
> >
> > in a loop from two parallel processes, then one of them will
> > almost instantaneously err out with
> >
> > ERROR: Cannot insert a duplicate key into unique index t_pkey
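One plausible interleaving that produces this error under the default READ COMMITTED isolation level (a sketch of the race, not the only possible schedule — exact timing varies between runs):

```sql
-- Session A                        -- Session B
begin;                              begin;
delete from t where i=1;            delete from t where i=1;
                                    -- B blocks: the row is locked by A
insert into t(i) values(1);
commit;
                                    -- A's commit wakes B; the old row is
                                    -- gone and A's new row is not part of
                                    -- B's delete, so B deletes 0 rows
                                    insert into t(i) values(1);
                                    -- ERROR: Cannot insert a duplicate key
                                    -- into unique index t_pkey
```

The key point is that session B's DELETE only sees (and waits on) the original row; A's replacement row is a brand-new tuple, so B's DELETE removes nothing and B's INSERT then collides with A's committed row.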
*I think* this is the correct behaviour, i.e. everything that one transaction
does should be visible to other transactions.
But then a question: how is this handled by PostgreSQL? (Two parallel
threads; a row where i=1 already exists.)

begin; -- << Thread 1
delete from t where i=1;
-- Now thread 1 does a lot of other stuff...
-- and while it's working, another thread starts doing its stuff:
begin; -- << Thread 2
insert into t(i) values(1);
commit; -- << Thread 2 is done, and all should be swell
-- What happens here????????????
rollback; -- << Thread 1 regrets its delete???????????
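As I understand the unique-index code (a sketch of the expected behaviour, not verified against the source): thread 2's INSERT finds thread 1's deleted-but-uncommitted row in the index and waits for thread 1's transaction to finish before deciding whether the key is a duplicate.

```sql
-- Thread 1                         -- Thread 2
begin;
delete from t where i=1;
                                    begin;
                                    insert into t(i) values(1);
                                    -- blocks: the unique check sees a row
                                    -- whose deletion is still in progress
                                    -- and waits for thread 1's outcome
rollback;  -- thread 1 backs out; its deleted row is live again
                                    -- ERROR: Cannot insert a duplicate key
                                    -- into unique index t_pkey
```

If thread 1 had committed instead, the old row would be dead and thread 2's INSERT (and then its COMMIT) should succeed.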
// Jarmo