From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Andrew L(dot) Gould" <algould(at)datawok(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: dealing with file size when archiving databases
Date: 2005-06-21 02:53:30
Message-ID: 19523.1119322410@sss.pgh.pa.us
Lists: pgsql-general
"Andrew L. Gould" <algould(at)datawok(dot)com> writes:
> I've been backing up my databases by piping pg_dump into gzip and
> burning the resulting files to a DVD-R. Unfortunately, FreeBSD has
> problems dealing with very large files (>1GB?) on DVD media. One of my
> compressed database backups is greater than 1GB; and the results of a
> gzipped pg_dumpall is approximately 3.5GB. The processes for creating
> the iso image and burning the image to DVD-R finish without any
> problems; but the resulting file is unreadable/unusable.
Yech. However, I think you are reinventing the wheel in your proposed
solution. Why not just use split(1) to divide the output of pg_dump or
pg_dumpall into slices that the DVD software won't choke on? See
notes at
http://developer.postgresql.org/docs/postgres/backup.html#BACKUP-DUMP-LARGE
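For instance, something along these lines (untested; "mydb" and the
1000MB slice size are just placeholders, pick whatever your burning
software tolerates):

    # dump, compress, and slice into 1000MB chunks
    pg_dump mydb | gzip | split -b 1000m - mydb.sql.gz.

    # later, reassemble the slices and restore
    cat mydb.sql.gz.* | gunzip | psql mydb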
regards, tom lane