From: | "Andrew L(dot) Gould" <algould(at)datawok(dot)com> |
---|---|
To: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
Cc: | pgsql-general(at)postgresql(dot)org |
Subject: | Re: dealing with file size when archiving databases |
Date: | 2005-06-21 03:44:57 |
Message-ID: | 200506202244.57084.algould@datawok.com |
Lists: pgsql-general
On Monday 20 June 2005 09:53 pm, Tom Lane wrote:
> "Andrew L. Gould" <algould(at)datawok(dot)com> writes:
> > I've been backing up my databases by piping pg_dump into gzip and
> > burning the resulting files to a DVD-R. Unfortunately, FreeBSD has
> > problems dealing with very large files (>1GB?) on DVD media. One
> > of my compressed database backups is greater than 1GB; and the
> > results of a gzipped pg_dumpall is approximately 3.5GB. The
> > processes for creating the iso image and burning the image to DVD-R
> > finish without any problems; but the resulting file is
> > unreadable/unusable.
>
> Yech. However, I think you are reinventing the wheel in your
> proposed solution. Why not just use split(1) to divide the output of
> pg_dump or pg_dumpall into slices that the DVD software won't choke
> on? See notes at
> http://developer.postgresql.org/docs/postgres/backup.html#BACKUP-DUMP-LARGE
>
> regards, tom lane
Thanks, Tom! The split option actually fixes the problem, whereas my
"solution" only delays it until a single table grows too large. Of
course, at that point, I should probably use something other than
DVDs.
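
For the archives, here's a minimal sketch of what that looks like,
following the docs Tom linked (the database name "mydb", the ~1GB slice
size, and the output prefix are just placeholders; adjust to taste):

  # dump, compress, and slice into ~1GB pieces (mydb.sql.gz.aa, .ab, ...)
  pg_dump mydb | gzip | split -b 1000m - mydb.sql.gz.

  # to restore: recreate the database, then reassemble and replay the dump
  createdb mydb
  cat mydb.sql.gz.* | gunzip | psql mydb

Since split reads from stdin with "-", no intermediate file ever has to
fit on the DVD in one piece.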
Andrew Gould