From: cpt(at)novozymes(dot)com
To: pgsql-bugs(at)postgresql(dot)org
Subject: BUG #13446: pg_dump fails with large tuples
Date: 2015-06-16 11:20:41
Message-ID: 20150616112041.2735.95092@wrigleys.postgresql.org
Lists: pgsql-bugs
The following bug has been logged on the website:
Bug reference: 13446
Logged by: CPT
Email address: cpt(at)novozymes(dot)com
PostgreSQL version: 9.3.5
Operating system: Linux, Ubuntu 12, 64-bit
Description:
It looks to me like pg_dump is limited to 1 GB per row of textual representation.
# create table stringtest (test text);
CREATE TABLE
# insert into stringtest select repeat('A', (1024*2014*510));
INSERT 0 1
# alter table stringtest add test2 text;
ALTER TABLE
# update stringtest set test2 = test;
UPDATE 1
# \q
$
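For reference, the stored sizes can be confirmed like this (a check I added for illustration, not part of the original session; octet_length() is standard PostgreSQL and reports a text value's size in bytes):
# select octet_length(test), octet_length(test2) from stringtest;
Each column should come back as 1024*2014*510 = 1,051,791,360 bytes, just under the 1 GB limit on a single text value, which is why the table itself can hold the row.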
So far so good... Now let's try to back this up using pg_dump:
$ pg_dump ... -t stringtest
...
pg_dump: Dumping the contents of table "stringtest" failed: PQgetResult() failed.
pg_dump: Error message from server: ERROR: out of memory
DETAIL: Cannot enlarge string buffer containing 1051791361 bytes by 1051791360 more bytes.
pg_dump: The command was: COPY public.stringtest (test, test2) TO stdout;
This message also shows up in the server logs. It looks like the textual representation of a row that pg_dump (via COPY) produces is limited to exactly 1 GB?
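The numbers in the DETAIL line are consistent with COPY serializing each whole row into a single string buffer capped at MaxAllocSize (0x3fffffff bytes, just under 1 GiB): the first column's 1,051,791,360 bytes plus the one-byte tab delimiter account for the 1,051,791,361 bytes already in the buffer, and appending the second column's 1,051,791,360 bytes would blow past the cap. If that reading is right, one workaround (a sketch, untested here) is to export each oversized column through its own COPY, so that no single output row has to carry both values:
# copy (select test from stringtest) to stdout;
# copy (select test2 from stringtest) to stdout;
pg_dump always copies whole rows, so this would have to happen outside pg_dump, e.g. by excluding the table with -T and exporting it by hand.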