From: David North <dtn(at)corefiling(dot)co(dot)uk>
To: pgsql-general(at)postgresql(dot)org
Subject: bytea columns and large values
Date: 2011-09-27 17:01:00
Message-ID: 4E82014C.6070501@corefiling.co.uk
Lists: pgsql-general
My application uses a bytea column to store some fairly large binary
values (hundreds of megabytes).
Recently I've run into a problem as my values start to approach the 1GB
limit on field size:
When I write a 955MB byte array into my table from Java via JDBC, the
write succeeds and the numbers look about right:
testdb=# select count(*) from problem_table;
count
-------
1
(1 row)
testdb=# select pg_size_pretty(pg_total_relation_size('problem_table'));
pg_size_pretty
----------------
991 MB
(1 row)
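
For reference, the write is roughly this (a minimal sketch of what I'm
doing; the column name "data" and the connection details are
placeholders rather than my real ones):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class ByteaWrite {
    public static void main(String[] args) throws Exception {
        // Stand-in for the real ~955MB value built elsewhere in the app
        byte[] data = new byte[955 * 1024 * 1024];
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:postgresql://localhost/testdb", "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                 "INSERT INTO problem_table (data) VALUES (?)")) {
            ps.setBytes(1, data);   // the whole array goes in as one bytea value
            ps.executeUpdate();
        }
    }
}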
However, any attempt to read this row back fails:
testdb=# select * from problem_table;
ERROR: invalid memory alloc request size 2003676411
The same error occurs when reading from JDBC (even using getBinaryStream).
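The read side is roughly this (again a sketch with the same placeholder
names); it fails whether I use getBytes or getBinaryStream:

import java.io.InputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ByteaRead {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:postgresql://localhost/testdb", "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                 "SELECT data FROM problem_table");
             ResultSet rs = ps.executeQuery()) {
            if (rs.next()) {
                // Fails with the same error even when streaming the column
                try (InputStream in = rs.getBinaryStream(1)) {
                    // ... consume the stream ...
                }
            }
        }
    }
}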
Is there some reason why my data can be stored in <1GB but triggers the
allocation of 2GB of memory when I try to read it back? Is there any
setting I can change or any alternate method of reading I can use to get
around this?
Thanks,
--
David North, Software Developer, CoreFiling Limited
http://www.corefiling.com
Phone: +44-1865-203192