From: Philip Warner <pjw(at)rhyme(dot)com(dot)au>
To: Tatsuo Ishii <t-ishii(at)sra(dot)co(dot)jp>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: pg_dump
Date: 2001-03-17 11:58:34
Message-ID: 3.0.5.32.20010317225834.01eafa20@mail.rhyme.com.au
Lists: pgsql-hackers
At 17:36 17/03/01 +0900, Tatsuo Ishii wrote:
>I know that new pg_dump can dump out large objects. But what about
>pg_dumpall? Do we have to dump out a whole database cluster by using
>pg_dumpall then run pg_dump separately to dump large objects?
That won't even work, since pg_dump won't dump BLOBs without dumping all
the tables in the database.
>That seems pain...
It is, if you do not have per-database backup procedures. The problem is
that pg_dumpall uses the plain-text dump format, which cannot restore
binary data without changes to lo_import. If lo_import could load
uuencoded data from STDIN, then maybe we could get it to work.
Alternatively, we might be able to add an option to pg_dumpall that dumps
one long script file with embedded TAR archives, but I have not really
looked into that option.
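For what it's worth, the uuencoding step itself is cheap. A minimal,
purely illustrative sketch (modern Python, not anything pg_dump or
lo_import actually does — the function names here are my own) of
round-tripping binary large-object data through a text-safe form:

```python
import binascii


def uuencode_text(data: bytes) -> str:
    """Encode raw bytes as uuencoded text lines (no begin/end header)."""
    # The uuencode format carries at most 45 bytes per output line,
    # so feed b2a_uu in 45-byte chunks.
    lines = []
    for i in range(0, len(data), 45):
        lines.append(binascii.b2a_uu(data[i:i + 45]).decode("ascii"))
    return "".join(lines)


def uudecode_text(text: str) -> bytes:
    """Reverse of uuencode_text: decode each line back to raw bytes."""
    return b"".join(binascii.a2b_uu(line) for line in text.splitlines())


# Round-trip check: arbitrary binary data survives the text encoding.
blob = bytes(range(256)) * 4
assert uudecode_text(uuencode_text(blob)) == blob
```

Something along these lines is all a hypothetical "lo_import from STDIN"
would need on the encoding side; the hard part is the server-side restore
path, not the text representation.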
----------------------------------------------------------------
Philip Warner | __---_____
Albatross Consulting Pty. Ltd. |----/ - \
(A.B.N. 75 008 659 498) | /(@) ______---_
Tel: (+61) 0500 83 82 81 | _________ \
Fax: (+61) 0500 83 82 82 | ___________ |
Http://www.rhyme.com.au | / \|
| --________--
PGP key available upon request, | /
and from pgp5.ai.mit.edu:11371 |/