From: | "John T(dot) Dow" <john(at)johntdow(dot)com> |
---|---|
To: | "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
Cc: | "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org> |
Subject: | Re: Dump/restore with bad data and large objects |
Date: | 2008-08-25 16:58:00 |
Message-ID: | 200808251658.m7PGwX45088951@web2.nidhog.com |
Lists: pgsql-general
Tom
My mistake in not realizing that 8.1 and later can dump large objects in the plain text format. I guess when searching for answers to a problem, the posted information doesn't always specify the version. So, sorry about that.
But the plain text format still has serious problems: the generated file is large for byte arrays and large objects, there is no way to selectively restore a single table, and bad data still isn't detected until you try to restore.
Or did I miss something else?
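For the record, what I'm comparing against is the custom format, roughly like this (database, table, and file names invented for illustration; -b forces large objects into the dump):

    pg_dump -Fc -b -f mydb.dump mydb
    pg_restore -d mydb -t badtable mydb.dump   # later, restore just the one problem table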
John
PS: Yes, I know you can pipe the output from pg_dumpall into an archiver, but it's my understanding that the binary data is written out in an inefficient text encoding, so even zipped the resulting file would be significantly larger than a custom-format dump.
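Something along these lines is what I have in mind (just a sketch):

    pg_dumpall | gzip > cluster.sql.gz   # bytea and large objects still pass through as escaped text before compression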
On Mon, 25 Aug 2008 12:14:41 -0400, Tom Lane wrote:
>"John T. Dow" <john(at)johntdow(dot)com> writes:
>> If you dump in plain text format, you can at least inspect the dumped
>> data and fix it manually or with iconv. But the plain text
>> format doesn't support large objects (again, not nice).
>
>It does in 8.1 and later ...
>
>> Also, neither of these methods gets information such as the roles,
>
>Use pg_dumpall.
>
> regards, tom lane