| From: | Sim Zacks <sim(at)compulab(dot)co(dot)il> |
|---|---|
| To: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
| Subject: | Re: transaction confusion |
| Date: | 2006-09-18 08:31:26 |
| Message-ID: | 450E595E.6010407@compulab.co.il |
| Lists: | pgsql-general |
Tom Lane wrote:
> I think the reason you are seeing failures in the first function is
> that the initial DELETE is a no-op so it doesn't serialize anything,
> and then there is conflict when the two INSERTs proceed in parallel.
Here is a simple, reproducible example showing that a DELETE does not cause the two function calls to run serially:
create table testserial(id serial primary key, val int);
create or replace function inserttest() returns int as
$$
begin
    delete from testserial;
    for i in 1..100000 loop
        insert into testserial(val) values(i);
    end loop;
    return 1;
end;
$$ language plpgsql;
select inserttest();
Now there are 100,000 records in the table. Run the function from two different sessions at the same time and you will see that the table ends up with 200,000 records, not 100,000.
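My understanding of why (a sketch of one possible interleaving, assuming the default READ COMMITTED isolation level): each statement takes its own snapshot, so the second session's DELETE blocks on the first session's row locks and, once the first session commits, finds those rows already gone while the first session's new rows are invisible to its snapshot:

-- Session A: select inserttest();  -- DELETE removes the existing 100,000 rows
-- Session B: select inserttest();  -- DELETE blocks on A's row locks
-- Session A: inserts 100,000 rows and commits
-- Session B: DELETE resumes; the old rows are already gone, and A's new rows
--            were inserted after B's statement snapshot, so B deletes nothing,
--            then inserts its own 100,000 rows and commits
select count(*) from testserial;    -- 200000: both sessions' inserts survive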
I also tested with an update statement:
create or replace function inserttest() returns int as
$$
begin
    update testserial set val = 5 where val = 1;
    delete from testserial;
    for i in 1..100000 loop
        insert into testserial(val) values(i);
    end loop;
    return 1;
end;
$$ language plpgsql;
When this function is run from two sessions at the same time, the two runs actually do serialize, and there are only 100,000 records in the table afterwards.
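As far as I can tell, the difference is that both sessions' UPDATEs target the same row (val = 1), so the second session blocks on that row lock until the first session's whole transaction commits; its DELETE then runs as a fresh statement whose new snapshot does see, and removes, the first session's 100,000 rows. A sketch, again assuming READ COMMITTED:

-- Session A: select inserttest();  -- UPDATE locks the row(s) with val = 1
-- Session B: select inserttest();  -- UPDATE blocks on that same row
-- Session A: DELETE, 100,000 INSERTs, commit
-- Session B: UPDATE resumes but finds its target row deleted; the DELETE
--            then takes a fresh snapshot and removes A's 100,000 rows,
--            and B inserts its own 100,000 rows and commits
select count(*) from testserial;    -- 100000: only the second session's rows remain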