From: Albe Laurenz <laurenz(dot)albe(at)wien(dot)gv(dot)at>
To: 'Balázs Zsoldos *EXTERN*' <balazs(dot)zsoldos(at)everit(dot)biz>, List <pgsql-jdbc(at)postgresql(dot)org>
Subject: Re: Concurrent read and write access of LargeObject via getBlob() can raise exception
Date: 2015-08-19 07:28:30
Message-ID: A737B7A37273E048B164557ADEF4A58B50F93953@ntex2010i.host.magwien.gv.at
Lists: pgsql-jdbc
Balázs Zsoldos wrote:
> I created a table with the following fields:
>
> * blob_id: bigint / primary key, auto increment
> * blob: oid / a pointer to a large object
>
> I created a trigger that unlinks the largeobject if a record is deleted from this table.
>
> If I
>
> * select a record from my table and get the ResultSet instance
> * parallel, I delete the blob within another transaction
> * call resultSet.getBlob(1).getBinaryStream();
>
> I get the following exception:
>
> Caused by: org.postgresql.util.PSQLException: ERROR: large object 97664 does not exist
> at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2270)
> at org.postgresql.core.v3.QueryExecutorImpl.receiveFastpathResult(QueryExecutorImpl.java:672)
> at org.postgresql.core.v3.QueryExecutorImpl.fastpathCall(QueryExecutorImpl.java:501)
> at org.postgresql.fastpath.Fastpath.fastpath(Fastpath.java:109)
> at org.postgresql.fastpath.Fastpath.fastpath(Fastpath.java:156)
> at org.postgresql.fastpath.Fastpath.getInteger(Fastpath.java:168)
> at org.postgresql.largeobject.LargeObject.<init>(LargeObject.java:106)
> at org.postgresql.largeobject.LargeObject.<init>(LargeObject.java:123)
> at org.postgresql.largeobject.LargeObject.copy(LargeObject.java:128)
> at org.postgresql.jdbc4.AbstractJdbc4Blob.getBinaryStream(AbstractJdbc4Blob.java:26)
> at org.everit.blobstore.jdbc.internal.StreamBlobChannel.read(StreamBlobChannel.java:97)
> at org.everit.blobstore.jdbc.internal.JdbcBlobReader.read(JdbcBlobReader.java:128)
>
> For me this means that it is impossible to be sure that, between selecting a record from my table
> and fetching the actual content of the blob, the content will still be the same as when I selected
> the record.
>
> I guess I can safely use the table only if I select the record with FOR SHARE.
Another, maybe better, option would be to start a transaction with isolation level
REPEATABLE READ and do your SELECT within that transaction.
That way you would not block others, but the large object would still be visible
to you even if a later transaction deleted it.
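
For illustration, here is a minimal JDBC sketch of that approach. The connection
details, the table name "mytable" and the key value are placeholders; the column
names are the ones from your description:

    import java.io.InputStream;
    import java.sql.Blob;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class RepeatableReadBlobRead {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/mydb", "user", "password")) {
                // Large objects can only be accessed inside a transaction,
                // so autocommit must be off.
                conn.setAutoCommit(false);
                // REPEATABLE READ keeps one snapshot for the whole transaction:
                // a concurrent DELETE (and the trigger's lo_unlink) will not
                // make the large object vanish between the SELECT and the read.
                conn.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ);

                try (PreparedStatement stmt = conn.prepareStatement(
                        "SELECT blob FROM mytable WHERE blob_id = ?")) {
                    stmt.setLong(1, 42L);
                    try (ResultSet rs = stmt.executeQuery()) {
                        if (rs.next()) {
                            Blob blob = rs.getBlob(1);
                            try (InputStream in = blob.getBinaryStream()) {
                                // ... consume the stream here, before committing ...
                            }
                        }
                    }
                }
                conn.commit();
            }
        }
    }

The important points are that autocommit is off and that the stream is consumed
before the commit, since the Blob is only valid while the transaction is open.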
Yours,
Laurenz Albe