Re: dealing with file size when archiving databases

From: Vivek Khera <vivek(at)khera(dot)org>
To: Postgres General <pgsql-general(at)postgresql(dot)org>
Subject: Re: dealing with file size when archiving databases
Date: 2005-06-21 13:56:25
Message-ID: EBC16E9E-8140-475B-8E50-2E928EDD6CA4@khera.org
Lists: pgsql-general


On Jun 20, 2005, at 10:28 PM, Andrew L. Gould wrote:

> compressed database backups is greater than 1GB; and the results of a
> gzipped pg_dumpall is approximately 3.5GB. The processes for creating
> the iso image and burning the image to DVD-R finish without any
> problems; but the resulting file is unreadable/unusable.

I ran into this as well. Apparently FreeBSD will not read large
files from an ISO 9660 file system, even though on a standard UFS or
UFS2 file system it will happily read files larger than you can make :-).

What I used to do was run "split -b 1024m my.dump my.dump-split-" to
create multiple files and burn those to the DVD. To restore, you run
"cat my.dump-split-?? | pg_restore" with the appropriate options to
pg_restore.
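For the record, the full round trip looks something like this (a
minimal sketch; the file names and the database name "mydb" are just
placeholders, and it assumes a pg_dump custom-format archive):

    # split the dump into 1GB pieces: my.dump-split-aa, my.dump-split-ab, ...
    split -b 1024m my.dump my.dump-split-

    # burn the pieces to DVD; later, reassemble them on the fly and restore
    cat my.dump-split-?? | pg_restore -d mydb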

My ultimate fix was to start burning and reading the DVDs on my
Mac OS X desktop instead, which reads and writes these large files
just fine :-)

Vivek Khera, Ph.D.
+1-301-869-4449 x806
