From: | "Vance Maverick" <vmaverick(at)pgp(dot)com> |
---|---|
To: | <pgsql-jdbc(at)postgresql(dot)org> |
Subject: | fyi: reading large BYTEA |
Date: | 2007-08-19 06:09:30 |
Message-ID: | DAA9CBC6D4A7584ABA0B6BEA7EC6FC0B5D31FE@hq-exch01.corp.pgp.com |
Lists: pgsql-jdbc
I'm working on reading large BYTEA fields from PostgreSQL 8.1. As
various people have pointed out
<http://archives.postgresql.org/pgsql-jdbc/2005-06/msg00138.php>,
ResultSet.getBinaryStream runs out of memory for large BYTEAs.
(PreparedStatement.setBinaryStream works fine.)
But there's a workaround: use the SUBSTRING function to read only a
chunk at a time, e.g.
    SELECT SUBSTRING(my_field FROM 1 FOR 10000) FROM my_table WHERE ...
This was suggested by Karsten Hilbert
<http://archives.postgresql.org/pgsql-general/2005-01/msg00032.php>.
I've had no trouble getting this to work -- as it happens, the chunkwise
reading strategy is consistent with what I was doing in the surrounding
code anyway.
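For concreteness, here's a rough sketch of the kind of loop I mean
(my_table, my_field, and id are placeholder names following the query
above, and the chunk size is arbitrary):

    import java.io.IOException;
    import java.io.OutputStream;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class ChunkedByteaReader {
        private static final int CHUNK = 250 * 1024; // bytes per round trip

        // Streams my_field for one row into 'out', one SUBSTRING at a
        // time, so at most CHUNK bytes of the value sit in memory at once.
        static void copyBytea(Connection conn, long id, OutputStream out)
                throws SQLException, IOException {
            // SUBSTRING on BYTEA is 1-based; FOR caps the slice length.
            PreparedStatement ps = conn.prepareStatement(
                "SELECT SUBSTRING(my_field FROM ? FOR ?) " +
                "FROM my_table WHERE id = ?");
            try {
                for (int offset = 1; ; offset += CHUNK) {
                    ps.setInt(1, offset);
                    ps.setInt(2, CHUNK);
                    ps.setLong(3, id);
                    ResultSet rs = ps.executeQuery();
                    try {
                        if (!rs.next()) return;   // no such row
                        byte[] chunk = rs.getBytes(1);
                        if (chunk == null || chunk.length == 0) return;
                        out.write(chunk);
                        if (chunk.length < CHUNK) return; // short read: done
                    } finally {
                        rs.close();
                    }
                }
            } finally {
                ps.close();
            }
        }
    }

Each executeQuery() is a separate round trip, so there's a latency
cost; that's the tradeoff for bounded memory.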
In discussion today
<http://archives.postgresql.org/pgsql-general/2007-08/msg01127.php>, Tom
Lane pointed out that
    Recent releases will [perform better] if the column has been marked
    SET STORAGE EXTERNAL (before storing anything in it...) See the
    ALTER TABLE reference page.
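Concretely, for the example table above that would be something like
(and per the caveat, it has to happen before the large values are
inserted):

    ALTER TABLE my_table ALTER COLUMN my_field SET STORAGE EXTERNAL;

EXTERNAL stores the value out of line but uncompressed, which is what
lets a SUBSTRING fetch seek straight to the requested slice instead of
decompressing the whole value.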
I tried this, and with the test sizes I was running (15 MB of data,
read in 250 KB chunks), the difference was not significant: reading was
a bit faster, but writing was considerably slower, adding up to a wash.
Your mileage may vary.
Vance