From: | "Andrew L(dot) Gould" <algould(at)datawok(dot)com> |
---|---|
To: | pgsql-general(at)postgresql(dot)org |
Subject: | dealing with file size when archiving databases |
Date: | 2005-06-21 02:28:51 |
Message-ID: | 200506202128.51463.algould@datawok.com |
Lists: pgsql-general
I've been backing up my databases by piping pg_dump into gzip and
burning the resulting files to a DVD-R. Unfortunately, FreeBSD has
problems dealing with very large files (>1GB?) on DVD media. One of my
compressed database backups is greater than 1GB, and the result of a
gzipped pg_dumpall is approximately 3.5GB. The processes for creating
the iso image and burning the image to DVD-R finish without any
problems; but the resulting file is unreadable/unusable.
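For reference, the per-database step I'm doing now is essentially the
following (a rough sketch only -- the database name and output path are
placeholders, not my actual script):

import subprocess

# Pipe pg_dump through gzip and write one compressed file per database.
def dump_database(dbname, outfile):
    with open(outfile, "wb") as out:
        dump = subprocess.Popen(["pg_dump", dbname], stdout=subprocess.PIPE)
        zipper = subprocess.Popen(["gzip", "-c"], stdin=dump.stdout, stdout=out)
        dump.stdout.close()   # let pg_dump see SIGPIPE if gzip exits early
        zipper.communicate()
        dump.wait()

dump_database("mydb", "/backups/mydb.sql.gz")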
My proposed solution is to modify my python script to:
1. use pg_dump to dump each database's tables individually, including
both the database and table name in the file name;
2. use 'pg_dumpall -g' to dump the global information; and
3. burn the backup directories, files and a recovery script to DVD-R.
The script will pipe pg_dump into gzip to compress the files; a rough
sketch follows.
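Something along these lines for steps 1 and 2 (only a sketch -- the
table query, file layout, and passwordless local connection are
assumptions for illustration, not the final script):

import subprocess

# Dump each user table of each database to its own gzipped file,
# then dump the global objects separately with pg_dumpall -g.
def backup_cluster(databases, backup_dir):
    for db in databases:
        # list the user tables in the public schema
        result = subprocess.run(
            ["psql", "-At", "-d", db, "-c",
             "SELECT tablename FROM pg_tables WHERE schemaname = 'public'"],
            capture_output=True, text=True)
        for table in result.stdout.split():
            outfile = "%s/%s.%s.sql.gz" % (backup_dir, db, table)
            with open(outfile, "wb") as out:
                dump = subprocess.Popen(["pg_dump", "-t", table, db],
                                        stdout=subprocess.PIPE)
                zipper = subprocess.Popen(["gzip", "-c"],
                                          stdin=dump.stdout, stdout=out)
                dump.stdout.close()
                zipper.communicate()
                dump.wait()
    # roles and other global objects are not covered by pg_dump
    with open("%s/globals.sql" % backup_dir, "wb") as out:
        subprocess.run(["pg_dumpall", "-g"], stdout=out)

backup_cluster(["mydb"], "/backups")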
My questions are:
1. Will 'pg_dumpall -g' dump everything not dumped by pg_dump? Will I
be missing anything?
2. Does anyone foresee any problems with the solution above?
Thanks,
Andrew Gould