Re: Large Objects in serializable transaction question

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: " Andreas Schönbach " <andreasschoenbach(at)web(dot)de>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Large Objects in serializable transaction question
Date: 2003-07-15 14:15:21
Message-ID: 22383.1058278521@sss.pgh.pa.us
Lists: pgsql-general

"Andreas Schönbach" <andreasschoenbach(at)web(dot)de> writes:
> I have a test program (using libpq) that reads data from a cursor and then reads large objects according to the result of the cursor. The cursor is opened in a serializable transaction.
> Just for test reasons I now tried the following:
> I started the test program, which reads the data from the cursor and then reads the large objects according to the result of each fetch. While the test was running, I dropped all large objects in a parallel session. Since I am using a serializable transaction in the test program, I should still be able to read all the large objects, even though I dropped them in a parallel session. But it does not work: I get an error that the large object can't be opened.
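For concreteness, the scenario boils down to a loop like this minimal libpq sketch (the table images(loid), the dbname, and the trimmed error handling are all assumptions, not the poster's actual code):

#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>
#include <libpq/libpq-fs.h>	/* INV_READ */

int
main(void)
{
	PGconn	   *conn = PQconnectdb("dbname=test");
	PGresult   *res;
	char		buf[8192];

	if (PQstatus(conn) != CONNECTION_OK)
	{
		fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
		return 1;
	}

	/* cursor and large-object reads share one serializable transaction */
	PQclear(PQexec(conn, "BEGIN"));
	PQclear(PQexec(conn, "SET TRANSACTION ISOLATION LEVEL SERIALIZABLE"));
	PQclear(PQexec(conn, "DECLARE c CURSOR FOR SELECT loid FROM images"));

	for (;;)
	{
		Oid			loid;
		int			fd, nread;

		res = PQexec(conn, "FETCH 1 FROM c");
		if (PQresultStatus(res) != PGRES_TUPLES_OK || PQntuples(res) == 0)
		{
			PQclear(res);
			break;
		}
		loid = (Oid) strtoul(PQgetvalue(res, 0, 0), NULL, 10);
		PQclear(res);

		/* fails if a parallel session has dropped the object,
		   even though the cursor still returns its OID */
		fd = lo_open(conn, loid, INV_READ);
		if (fd < 0)
		{
			fprintf(stderr, "lo_open(%u): %s", loid, PQerrorMessage(conn));
			continue;
		}
		nread = lo_read(conn, fd, buf, sizeof(buf));
		printf("large object %u: read %d bytes\n", loid, nread);
		lo_close(conn, fd);
	}

	PQclear(PQexec(conn, "COMMIT"));
	PQfinish(conn);
	return 0;
}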

Yeah. The large object operations use SnapshotNow (effectively
read-committed) rather than looking at the surrounding transaction's
snapshot. This is a bug IMHO, but no one's got round to working on
it. (It's not entirely clear how the LO functions could access the
appropriate snapshot.)
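To see the effect, it is enough for a second connection to drop the objects mid-scan. A minimal sketch, reusing the hypothetical images table from above (lo_unlink here is the server-side SQL function):

	/* parallel session: drop every large object while the reader is
	   mid-loop; its FETCHes keep returning OIDs through the serializable
	   snapshot, but each subsequent lo_open fails, because the large
	   object lookup goes through SnapshotNow */
	PGconn *conn2 = PQconnectdb("dbname=test");
	PQclear(PQexec(conn2, "SELECT lo_unlink(loid) FROM images"));
	PQfinish(conn2);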

regards, tom lane
