From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: cjwhite(at)cisco(dot)com
Cc: pgsql-jdbc(at)postgresql(dot)org, pgsql-admin(at)postgresql(dot)org
Subject: Re: [JDBC] Problems with Large Objects using Postgres 7.2.1
Date: 2003-04-09 19:20:13
Message-ID: 9428.1049916013@sss.pgh.pa.us
Lists: pgsql-admin, pgsql-jdbc

"Chris White" <cjwhite(at)cisco(dot)com> writes:
> Looking at our code further, the actual code writes the large object,
> commits it, opens the large object, updates the header of the large object
> (first 58 bytes) with some length info using seeks, then writes and commits
> the object again, before updating and committing the associated tables. The
> data I saw in the exported file was the header info without the updates for
> the length info, i.e. after the first commit!!

Oh, that's interesting.  I wonder whether you could be running into some
variant of this issue:
http://archives.postgresql.org/pgsql-hackers/2002-05/msg00875.php
I looked a little bit at fixing this, but wasn't sure how to get the
appropriate snapshot passed to the LO functions --- the global
QuerySnapshot might not be the right thing, but then what is? Also,
what if a transaction opens multiple LO handles for the same object
--- should they be able to see each other's updates?  (I'm not sure
we could prevent it, so this may be moot.)

BTW what do you mean exactly by "commit" above?  There is no notion of
committing a large object separately from committing a transaction.

			regards, tom lane
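[Editor's note] The visibility hazard discussed above can be illustrated with a toy model. This is a minimal sketch in plain Python, not PostgreSQL internals: the `ToyLargeObject` class is invented for illustration, and real MVCC is far more involved. The point it demonstrates is that a reader whose snapshot was taken before the second commit keeps seeing the pre-update header bytes, which matches the exported-file symptom Chris describes.

```python
# Toy model of snapshot visibility -- NOT PostgreSQL internals, just an
# illustration. Each commit produces an immutable version; a reader sees
# only the version that was current when its snapshot was taken, so
# commits made after the snapshot remain invisible to it.

class ToyLargeObject:
    def __init__(self):
        self._versions = []             # committed versions, in commit order

    def commit(self, data: bytes) -> None:
        self._versions.append(data)

    def snapshot(self) -> int:
        # A snapshot is just the index of the latest committed version.
        return len(self._versions) - 1

    def read(self, snap: int) -> bytes:
        # Reads through a snapshot never see later commits.
        return self._versions[snap]

lo = ToyLargeObject()
lo.commit(b"\x00" * 58 + b"payload")    # first commit: placeholder header
snap = lo.snapshot()                    # exporter captures its snapshot here
lo.commit(b"L" * 58 + b"payload")       # second commit: real length info
stale = lo.read(snap)                   # exporter still sees the placeholder
print(stale[:4])                        # b'\x00\x00\x00\x00'
```

In the real scenario, the "snapshot" corresponds to the QuerySnapshot Tom mentions: if the large-object read path uses a snapshot taken before the header-updating transaction committed, the exported bytes are the first-commit version.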