From: Henrik <henke(at)mac(dot)se>
To: pgsql-general(at)postgresql(dot)org
Subject: PG-8.2 backup strategies
Date: 2008-01-21 15:05:13
Message-ID: 1E32DA8B-B22B-4B27-9B42-0FFFA28A0C7A@mac.se
Lists: pgsql-general
Hello list,
I know backing up PostgreSQL is a well-discussed topic, with solutions ranging from a simple pg_dump to more advanced PITR and replication with Slony.
Even though I've studied most of them, I can't really decide on the best solution for a new situation and would be grateful for any input on this.
The situation is as follows.
We want to do daily backups from many installations to a remote location, and we also want easy restores when disaster strikes. Preferably the backup site would only need an FTP server to store the files on.
My optimal solution would be differential pg_dumps, but that is not possible as far as I know. Doing full pg_dumps every day is a little too heavy, even though the DBs are not huge. I like the fact that I get one big SQL file which is really simple to restore from.
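For concreteness, the daily job I have in mind is something like the sketch below, run from cron; the database name, FTP host, credentials and paths are just placeholders for illustration, not real values:

```shell
#!/bin/sh
# Sketch of a daily dump-and-ship job, assuming pg_dump (PG 8.2) on the
# database host and an FTP-only backup site. DB name, host, user and
# paths below are placeholder examples.
DB=mydb
STAMP=$(date +%Y%m%d)
DUMPFILE=/tmp/${DB}-${STAMP}.sql.gz

# Plain-text dump, compressed on the fly.
# Restore later with: gunzip -c file.sql.gz | psql mydb
pg_dump "$DB" | gzip > "$DUMPFILE"

# Upload the dump to the remote FTP server, then remove the local copy.
curl -sf -T "$DUMPFILE" "ftp://backup.example.com/pg/" --user backup:secret
rm -f "$DUMPFILE"
```

The nice property is that the backup site really only needs FTP, and a restore is just fetching one file.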
The next best solution would probably be weekly pg_dumps with daily WAL shipping. But how would this handle tables with columns that get tiny updates several times per second? Would I get huge WALs?
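The WAL shipping I'm picturing would be something like this in postgresql.conf (the FTP target is just an example of what the archive_command might look like against an FTP-only backup site):

```
# postgresql.conf fragment (PG 8.2): archive each completed WAL segment
# to the backup site. Host and credentials are placeholders.
archive_command = 'curl -sf -T %p "ftp://backup.example.com/wal/%f" --user backup:secret'
```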
I would really like to avoid a PostgreSQL installation at the backup site, but maybe that is the best solution? And then use Slony or similar, but only replicate once a day? Then I could make a dump when I need to restore and ship that SQL file to the restore location.
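Either way, the restore path I'd want is as simple as fetching the latest SQL file and feeding it to psql, roughly like this (file name and host again just placeholders):

```shell
#!/bin/sh
# Sketch of the disaster-recovery path, assuming a gzipped plain-SQL
# dump sitting on the FTP backup site (names are placeholders).
curl -sf "ftp://backup.example.com/pg/mydb-20080121.sql.gz" --user backup:secret \
  | gunzip \
  | psql mydb
```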
Maybe I have really weird ideas about this, but some pointers would be nice.
Thanks!
//Henke
Next Message: Magnus Hagander, 2008-01-21 15:18:01, Re: PG-8.2 backup strategies
Previous Message: Albe Laurenz, 2008-01-21 14:43:19, Re: Views and permissions