| From: | "Vance Maverick" <vmaverick(at)pgp(dot)com> |
|---|---|
| To: | <pgsql-general(at)postgresql(dot)org> |
| Subject: | SUBSTRING performance for large BYTEA |
| Date: | 2007-08-18 12:36:54 |
| Message-ID: | DAA9CBC6D4A7584ABA0B6BEA7EC6FC0B5D31F8@hq-exch01.corp.pgp.com |
| Lists: | pgsql-general |
I'm working on reading large BYTEA fields from PostgreSQL 8.1. (For
legacy reasons, it's unattractive to move them to large objects.) I'm
using JDBC, and as various people have pointed out
<http://archives.postgresql.org/pgsql-jdbc/2005-06/msg00138.php>, the
standard stream-style access method runs out of memory for large BYTEAs.
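For concreteness, this is roughly the access pattern that blows up for me (just a sketch; the table and column names are the ones from my query below, and "the whole value is buffered before the stream is handed back" is my understanding of the driver's behavior from that thread, not something I've verified in its source):

```java
import java.io.InputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class WholeByteaReader {
    // Plain stream-style read: as far as I can tell, the PostgreSQL JDBC
    // driver materializes the entire BYTEA value on the client before
    // getBinaryStream() returns, so very large values exhaust the heap.
    static void readWhole(Connection conn, long id) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT my_bytea FROM my_table WHERE id = ?")) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    InputStream in = rs.getBinaryStream(1); // already fully in memory
                    // ... consume the stream ...
                }
            }
        }
    }
}
```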
Karsten Hilbert mentions using SUBSTRING to read these BYTEA fields a
chunk at a time
<http://archives.postgresql.org/pgsql-general/2005-01/msg00032.php>.
I've tried this, and indeed it works. (Once I corrected for the 1-based
indexing ;-))
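In case it's useful to anyone else, here is a minimal sketch of the loop I'm using (Java/JDBC, with the table and column names from my query below; the 256 KB chunk size is an arbitrary pick, and the termination checks are my own additions, not anything from Karsten's post):

```java
import java.io.ByteArrayOutputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ChunkedByteaReader {
    // Read my_bytea for one row in fixed-size pieces via SUBSTRING,
    // so the driver only ever holds one chunk in memory at a time.
    static byte[] readBytea(Connection conn, long id) throws Exception {
        final int CHUNK = 256 * 1024;   // arbitrary chunk size for illustration
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        String sql = "SELECT SUBSTRING(my_bytea FROM ? FOR ?) FROM my_table WHERE id = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            int offset = 1;             // SUBSTRING offsets are 1-based
            while (true) {
                ps.setInt(1, offset);
                ps.setInt(2, CHUNK);
                ps.setLong(3, id);
                try (ResultSet rs = ps.executeQuery()) {
                    if (!rs.next()) {
                        break;          // no such row
                    }
                    byte[] chunk = rs.getBytes(1);
                    if (chunk == null || chunk.length == 0) {
                        break;          // offset is past the end of the value
                    }
                    out.write(chunk);
                    if (chunk.length < CHUNK) {
                        break;          // short chunk means we hit the end
                    }
                    offset += CHUNK;
                }
            }
        }
        return out.toByteArray();
    }
}
```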
My question is about performance inside the PostgreSQL server. When I execute
"SELECT SUBSTRING (my_bytea FROM ? FOR ?) FROM my_table WHERE id = ?",
does it fetch the whole BYTEA into memory? Or does it access only the
pages that contain the requested substring?
Vance