From: Adam Haberlach <adam(at)newsnipple(dot)com>
To: Bryan White <bryan(at)arcamax(dot)com>
Cc: pgsql-general <pgsql-general(at)postgreSQL(dot)org>
Subject: Re: pg_dump's over 2GB
Date: 2000-09-29 17:51:58
Message-ID: 20000929105158.A10745@ricochet.net
Lists: pgsql-general
On Fri, Sep 29, 2000 at 12:15:26PM -0400, Bryan White wrote:
> My backups made with pg_dump are currently 1.3GB. I am wondering
> what kind of headaches I will have to deal with once they exceed 2GB.
>
> What will happen with pg_dump on a Linux 2.2.14 i386 kernel when the output
> exceeds 2GB?
> Currently the dump file is later fed to a 'tar cvfz'. I am thinking that
> instead I will need to pipe pg_dump's output into gzip, thus avoiding the
> creation of a file of that size.
>
> Does anyone have experience with this sort of thing?
We have had some problems with tar silently truncating some >2GB files
during a backup. We also had to move the Perforce server from Linux to
BSD because some checkpoint files were being truncated at 2GB (not a
Perforce problem, but a Linux one).
Be careful, test frequently, etc...
--
Adam Haberlach | A billion hours ago, human life appeared on
adam(at)newsnipple(dot)com | earth. A billion minutes ago, Christianity
http://www.newsnipple.com | emerged. A billion Coca-Colas ago was
'88 EX500 | yesterday morning. -1996 Coca-Cola Ann. Rpt.
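[Editor's note: the streaming approach Bryan describes can be sketched as below. This is a minimal sketch, not from the original thread; the database name "mydb", the chunk size, and the `-b` byte-count syntax of `split` (GNU coreutils form) are illustrative assumptions.]

```shell
# Stream pg_dump straight into gzip so no uncompressed
# intermediate file is ever written to disk.
pg_dump mydb | gzip > mydb.sql.gz

# If even the compressed dump could exceed 2GB, split the stream
# into 1GB pieces as it is written, so no single file hits the limit.
pg_dump mydb | gzip | split -b 1000m - mydb.sql.gz.

# To restore, reassemble the pieces and decompress on the fly.
cat mydb.sql.gz.* | gunzip | psql mydb
```

Because gzip output can be split at arbitrary byte boundaries and concatenated back together, the restore pipeline works without ever materializing the full dump on disk.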