From: Shalini <shalini(at)saralweb(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Tuple concurrency issue in large objects
Date: 2019-12-06 09:26:25
Message-ID: 610855df-f400-4cc3-6d81-85030ff05fa1@saralweb.com
Lists: pgsql-general
Hi all,
I am working on a project which allows multiple users to work on a single
large text document. I am using lo_put to write only the diff into the
large object, without replacing it with a new one. While working on this, I
encountered the error "tuple concurrently updated".

The error can be reproduced with two psql clients.
Setup:
mydb=# create table text_docs(id serial primary key, data oid);
CREATE TABLE
mydb=# insert into text_docs(data) select lo_import('./upload.txt');
INSERT 0 1
mydb=# select * from text_docs;
id | data
----+---------
1 | 5810130
(1 row)
Now, if we open two psql clients and execute the following commands:
Client 1:
mydb=# begin;
BEGIN
mydb=# select lo_put(5810130, 10, '\xaa');
 lo_put
--------

(1 row)
Client 2:
mydb=# select lo_put(5810130, 10, '\xaa');
Client 1:
mydb=# commit;
COMMIT
Client 2:
mydb=# select lo_put(5810130, 10, '\xaa');
ERROR: tuple concurrently updated
Is there a workaround for this concurrency issue that does not require
creating a new large object?
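
One possible mitigation (a sketch I have not verified against this exact
error, so treat it as an assumption): serialize the writers yourself by
taking a lock before calling lo_put, for example a transaction-scoped
advisory lock keyed on the large object's OID. Each client would then do:

mydb=# begin;
BEGIN
mydb=# select pg_advisory_xact_lock(5810130);
mydb=# select lo_put(5810130, 10, '\xaa');
mydb=# commit;
COMMIT

Because pg_advisory_xact_lock blocks until the other transaction commits or
rolls back, the second client's lo_put would not start until the first
client's update is visible. Locking the owning row instead (select data
from text_docs where id = 1 for update) should have a similar serializing
effect, at the cost of blocking readers of that row who also lock it.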
Regards
Shalini