SUBSTRING performance for large BYTEA

From: "Vance Maverick" <vmaverick(at)pgpeng(dot)com>
To: <pgsql-general(at)postgresql(dot)org>
Subject: SUBSTRING performance for large BYTEA
Date: 2007-08-18 12:34:17
Message-ID: 000101c7e194$184ff090$a302d60a@corp.pgp.com
Lists: pgsql-general

I'm working on reading large BYTEA fields from PostgreSQL 8.1. (For legacy
reasons, it's unattractive to move them to large objects.) I'm using JDBC,
and as various people have pointed out
<http://archives.postgresql.org/pgsql-jdbc/2005-06/msg00138.php>, the
standard stream-style access method runs out of memory for large BYTEAs.

Karsten Hilbert mentions using SUBSTRING to read these BYTEA fields a chunk
at a time
<http://archives.postgresql.org/pgsql-general/2005-01/msg00032.php>. I've
tried this, and indeed it works. (Once I corrected for the 1-based indexing
;-))

My question is about performance in the postgres server. When I execute
"SELECT SUBSTRING (my_bytea FROM ? FOR ?) FROM my_table WHERE id = ?", does
it fetch the whole BYTEA into memory? Or does it access only the pages that
contain the requested substring?
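
In case it helps to be concrete, the chunked read I have in mind looks roughly
like the sketch below. The chunk size is arbitrary, the column/table names are
the ones from the query above, and error handling is omitted:

    import java.io.IOException;
    import java.io.OutputStream;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class ByteaChunkReader {

        // Arbitrary chunk size; tune for your workload.
        private static final int CHUNK_SIZE = 64 * 1024;

        /**
         * Streams a large BYTEA value to 'out' by fetching it one
         * SUBSTRING chunk at a time, so the driver never has to hold
         * the whole value in memory at once.
         */
        public static void copyBytea(Connection conn, long rowId, OutputStream out)
                throws SQLException, IOException {
            String sql = "SELECT SUBSTRING(my_bytea FROM ? FOR ?) FROM my_table WHERE id = ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                int offset = 1;                    // SUBSTRING offsets are 1-based
                while (true) {
                    ps.setInt(1, offset);
                    ps.setInt(2, CHUNK_SIZE);
                    ps.setLong(3, rowId);
                    try (ResultSet rs = ps.executeQuery()) {
                        if (!rs.next()) {
                            return;                // no such row
                        }
                        byte[] chunk = rs.getBytes(1);
                        if (chunk == null || chunk.length == 0) {
                            return;                // read past the end of the value
                        }
                        out.write(chunk);
                        if (chunk.length < CHUNK_SIZE) {
                            return;                // short chunk => last one
                        }
                        offset += chunk.length;
                    }
                }
            }
        }
    }

That loop clearly avoids the memory problem on the client side; what I can't
tell is how much work each of those SUBSTRING calls costs on the server.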

Vance
