From: | "Joe Conway" <joseph(dot)conway(at)home(dot)com> |
---|---|
To: | "Andrew Gould" <andrewgould(at)yahoo(dot)com>, "David Ford" <david(at)blue-labs(dot)org>, <pgsql-general(at)postgresql(dot)org> |
Subject: | Re: Problem w/ dumping huge table and no disk space |
Date: | 2001-09-07 22:23:20 |
Message-ID: | 010201c137eb$b32d0980$0705a8c0@jecw2k1 |
Lists: pgsql-general
> Have you tried dumping individual tables separately
> until it's all done?
>
> I've never used the -Z option, so I can't compare its
> compression to piping a pg_dump through gzip.
> However, this is how I've been doing it:
>
> pg_dump db_name | gzip -c > db_name.gz
>
> I have a 2.2 Gb database that gets dumped/compressed
> to a 235 Mb file.
>
> Andrew
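For the table-by-table approach Andrew mentions, a command along these
lines might do it (the table and database names here are just
placeholders):

  pg_dump -t big_table db_name | gzip -c > big_table.sql.gz

Repeat that for each table and you can restore them one at a time later.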
Another idea you might try is to run pg_dumpall from a different host
(one with ample disk space), using the -h and -U options.
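For example (the host name and user below are only placeholders),
something like:

  pg_dumpall -h db.example.com -U postgres | gzip -c > all_dbs.sql.gz

writes the compressed dump to the machine you run it from instead of the
database server.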
HTH,
Joe
Usage:
  pg_dumpall [ options... ]

Options:
  -c, --clean              Clean (drop) schema prior to create
  -g, --globals-only       Only dump global objects, no databases
  -h, --host=HOSTNAME      Server host name
  -p, --port=PORT          Server port number
  -U, --username=NAME      Connect as specified database user
  -W, --password           Force password prompts (should happen automatically)
Any extra options will be passed to pg_dump. The dump will be written
to the standard output.