Re: Dump large DB and restore it after all.

From: Condor <condor(at)stz-bg(dot)com>
To: <pgsql-general(at)postgresql(dot)org>
Subject: Re: Dump large DB and restore it after all.
Date: 2011-07-05 11:31:13
Message-ID: ae75615f39f1f8f78cfefef707ec48ea@stz-bg.com
Lists: pgsql-general

On Tue, 05 Jul 2011 18:08:21 +0800, Craig Ringer wrote:
> On 5/07/2011 5:00 PM, Condor wrote:
>> Hello ppl,
>> can I ask how to dump large DB ?
>
> Same as a smaller database: using pg_dump. Why are you trying to
> split your dumps into 1GB files? What does that gain you?
>
> Are you using some kind of old file system and operating system that
> cannot handle files bigger than 2GB? If so, I'd be pretty worried
> about running a database server on it.

Well, I ran pg_dump on an ext3 filesystem with Postgres 8.x and 9, and the
SQL file was truncated.
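
For the archives, one way to keep each piece around 1 GB is to pipe pg_dump
through gzip and split. This is only a sketch: "mydb" is a placeholder
database name, and the "1G" size suffix assumes GNU split (older versions
may want something like -b 1000m):

  $ pg_dump mydb | gzip | split -b 1G - mydb.sql.gz.part_

To restore, concatenate the pieces back together and feed them to psql:

  $ cat mydb.sql.gz.part_* | gunzip | psql mydb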

>
> As for gzip: gzip is almost perfectly safe. The only downside with
> gzip is that a corrupted block in the file (due to a hard
> disk/dvd/memory/tape error or whatever) makes the rest of the file,
> after the corrupted block, unreadable. Since you shouldn't be storing
> your backups on anything that might get corrupted blocks, that should
> not be a problem. If you are worried about that, you're better off
> still using gzip and using an ECC coding system like par2 to allow
> recovery from bad blocks. The gzipped dump plus the par2 file will be
> smaller than the uncompressed dump, and give you much better
> protection against errors than an uncompressed dump will.
>
> To learn more about par2, go here:
>
> http://parchive.sourceforge.net/

Thank you for info.
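
For the archives, a rough sketch of the gzip plus par2 approach described
above, assuming the par2 command-line tool from parchive is installed;
"mydb" / "mydb.sql.gz" are placeholder names and 10% redundancy is just an
example value:

  $ pg_dump mydb | gzip > mydb.sql.gz
  $ par2 create -r10 mydb.sql.gz

Later, if a block goes bad, the dump can be checked and repaired from the
.par2 recovery files:

  $ par2 verify mydb.sql.gz.par2
  $ par2 repair mydb.sql.gz.par2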

> --
> Craig Ringer
>

--
Regards,
Condor
