From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: Ludovic POLLET <ludovic(dot)pollet(at)kurmi-software(dot)com>, "pgsql-jdbc(at)postgresql(dot)org" <pgsql-jdbc(at)postgresql(dot)org>
Subject: Re: TR: [BUGS] BUG #8842: lo_open/fastpath transaction inconsistency
Date: 2014-05-05 13:54:57
Message-ID: 1399298097.47090.YahooMailNeo@web122305.mail.ne1.yahoo.com
Lists: pgsql-bugs pgsql-jdbc
Ludovic POLLET <ludovic(dot)pollet(at)kurmi-software(dot)com> wrote:
> We are facing a bug where our application cannot read a LOB
> because of its concurrent removal, even though we are using the
> SERIALIZABLE isolation level and a correct transaction scope.
> We're using the latest JDBC driver snapshot (and Hibernate, but
> that does not really matter).
>
> The problematic use case is basically one thread reading the LOB
> while another thread replaces it with a new one and releases the
> old one. In this case, the reader gets a "large object xxx
> does not exist" error.
>
> From what I understand, this is because the large object is
> always opened in read/write mode, which makes it behave as if it
> were READ COMMITTED (as noted in the documentation).
>
> This behaviour is very surprising, especially when working at
> the SERIALIZABLE isolation level.
>
> The attached patch implements the following changes:
> - Open the LOB in read-only mode until a write function is
> called; at that point the LOB is reopened in read/write mode.
> This completely solves my problem, since our application never
> modifies LOBs but creates new ones instead.
> - Defer the first lo_open until a Blob method is actually called
> (Hibernate seems to create Blobs even when it never reads
> them...).
>
> What do you think about these behaviors?
As one of the developers of the implementation of serializable
transactions used by PostgreSQL, I remember that we needed to
exclude large objects from the transaction serialization because it
basically did not follow transactional semantics in any other
regard. The only workaround I can think of is to only insert and
delete large objects -- never update them -- and to rely on the
serialization of modifications to the OID columns which
*reference* the large objects. Always add the large object before
setting a reference to it, and always eliminate references to a
large object before deleting it. A crash at the wrong time could
leave a large object which is unreferenced, so you might want to
look at the vacuumlo executable:
http://www.postgresql.org/docs/current/static/vacuumlo.html
Unfortunately, I don't know how that approach interacts with
Hibernate or its use of the JDBC driver.
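As a rough illustration, the insert-then-reference ordering described above might look something like the sketch below in plain JDBC, using the server-side lo_creat and lo_unlink functions. The docs table, content_oid column, and method name are illustrative assumptions, not anything defined by the driver or this thread, and actually writing the new object's bytes (e.g. via the driver's LargeObject API) is elided.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Hypothetical sketch: replace the large object referenced by one row
// without ever updating a large object in place. Order matters: create
// the new LO first, repoint the OID reference, and only then unlink the
// old LO, so that serialization conflicts are detected on the ordinary
// OID column rather than on the large object itself.
public class LobReplace {

    static void replaceContent(Connection conn, long docId) throws Exception {
        conn.setAutoCommit(false);
        try {
            // Lock the row and remember the OID of the old large object.
            long oldOid;
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT content_oid FROM docs WHERE id = ? FOR UPDATE")) {
                ps.setLong(1, docId);
                try (ResultSet rs = ps.executeQuery()) {
                    rs.next();
                    oldOid = rs.getLong(1);
                }
            }
            // 1. Create the new large object before anything references it.
            //    (Filling it with data via the driver's LargeObject API is
            //    omitted here.)
            long newOid;
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT lo_creat(-1)")) {
                try (ResultSet rs = ps.executeQuery()) {
                    rs.next();
                    newOid = rs.getLong(1);
                }
            }
            // 2. Repoint the reference; this UPDATE on a regular column is
            //    what SERIALIZABLE can actually serialize.
            try (PreparedStatement ps = conn.prepareStatement(
                    "UPDATE docs SET content_oid = ? WHERE id = ?")) {
                ps.setLong(1, newOid);
                ps.setLong(2, docId);
                ps.executeUpdate();
            }
            // 3. Only now unlink the old, no-longer-referenced large object.
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT lo_unlink(?)")) {
                ps.setLong(1, oldOid);
                ps.execute();
            }
            conn.commit();
        } catch (Exception e) {
            conn.rollback();
            throw e;
        }
    }
}
```

A crash between steps 2 and 3 can still orphan the old object, which is exactly the case vacuumlo is meant to clean up.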
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company