From: | robert engels <rengels(at)ix(dot)netcom(dot)com> |
---|---|
To: | Kris Jurka <jurka(at)ejurka(dot)com> |
Cc: | pgsql-jdbc(at)postgresql(dot)org |
Subject: | Re: BLOB is read into memory instead of streaming (bug?) |
Date: | 2008-05-01 23:49:59 |
Message-ID: | 5C716B3D-2731-4C8C-81D0-74F12F81DDAF@ix.netcom.com |
Lists: | pgsql-jdbc |
That's good to know.
The spec allows BLOBs to be read using getBytes() and getBinaryStream() as well.

getBinaryStream() should allow a bytea column to be read without allocating an array to hold all of the data. BUT, the low-level db protocol would need to support reading the column in chunks.
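
[Editor's note: a minimal sketch, not part of the original message, of what reading a bytea column through getBinaryStream() might look like with the PostgreSQL JDBC driver. The connection URL, table, and column names are placeholders. As the thread notes, the stream avoids building a byte[] in user code, but the driver may still have buffered the whole column because the wire protocol delivers it in one piece.]

```java
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ByteaStreamSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details and schema (hypothetical "images" table).
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:postgresql://localhost/testdb", "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                 "SELECT payload FROM images WHERE id = ?")) {
            ps.setInt(1, 1);
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    // Copy the bytea value to a file in 8 KB chunks.
                    // No byte[] for the whole value is allocated here,
                    // though the driver may already hold the row in memory.
                    try (InputStream in = rs.getBinaryStream(1);
                         OutputStream out = new FileOutputStream("payload.bin")) {
                        byte[] buf = new byte[8192];
                        int n;
                        while ((n = in.read(buf)) != -1) {
                            out.write(buf, 0, n);
                        }
                    }
                }
            }
        }
    }
}
```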
On May 1, 2008, at 6:44 PM, Kris Jurka wrote:
>
>
> robert engels wrote:
>> This seems like a very bad impl - at least for JDBC.
>> Why are the details of this access not hidden in the JDBC driver?
>> The column type is the only thing that a user should be concerned
>> with.
>> Why would someone want to code proprietary Postgres code just to
>> access BLOBs?
>> The JDBC Blob API is very good, using either the Blob/locator
>> interface or getInputStream().
>
> I think you've misunderstood me. The documentation shows using a
> proprietary API, but get/setBlob works just fine. I pointed to the
> documentation because it explains some of the differences between
> the bytea and large object datatypes. It's really that the
> documentation needs an additional example for the standard blob usage.
>
> Kris Jurka
>
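
[Editor's note: to illustrate the "standard blob usage" Kris refers to, here is a minimal sketch of plain JDBC getBlob() against a PostgreSQL large object (oid) column. The table and column names are placeholders; the requirement that large object access run inside a transaction comes from the PostgreSQL documentation the thread discusses.]

```java
import java.io.InputStream;
import java.sql.Blob;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class LargeObjectBlobSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:postgresql://localhost/testdb", "user", "password")) {
            // Large object access must happen inside a transaction.
            conn.setAutoCommit(false);

            try (PreparedStatement ps = conn.prepareStatement(
                     "SELECT content FROM documents WHERE id = ?")) {  // hypothetical table
                ps.setInt(1, 1);
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        Blob blob = rs.getBlob(1);  // a locator, not the data itself
                        try (InputStream in = blob.getBinaryStream()) {
                            byte[] buf = new byte[8192];
                            int n;
                            while ((n = in.read(buf)) != -1) {
                                // process each chunk; the data is fetched from the
                                // server piecewise rather than all at once
                            }
                        }
                        blob.free();
                    }
                }
            }
            conn.commit();
        }
    }
}
```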