From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: Steve Burrows <steve(at)jla(dot)com>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: Backing up large databases
Date: 2006-05-03 07:55:57
Message-ID: 1146642957.449.11.camel@localhost.localdomain
Lists: pgsql-admin
On Fri, 2006-04-28 at 15:57 +0000, Steve Burrows wrote:
> I am struggling to find an acceptable way of backing up a PostgreSQL
> 7.4 database.
>
> The database is quite large, currently it occupies about 180GB,
> divided into two elements, a set of active tables and a set of archive
> tables which are only used for insertions.
>
> I ran pg_dump -Fc recently, it took 23.5 hours to run, and output a
> single file of 126GB. Obviously as the database continues to grow it
> will soon be so large that it cannot be pg_dumped within a day.
> Running rsync to do a complete fresh copy of the pgsql file structure
> took 4 hours, but later that day running another iteration of rsync
> (which should have only copied changed files) took 3 hours, and I
> cannot afford to have the db down that long.
>
> Anybody with any ideas?
You need not back up the whole database in one go.
You can copy only the changed data out of the tables, using your knowledge
of the mail store. That way you won't need such a long-running backup, and
it won't be so large either. You can then reassemble the pieces into a new
table if you ever need to recover.
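
For illustration only (the names below are made up: a database "maildb" and
an insert-only archive table "mail_archive" keyed by a monotonically
increasing "id" column), a nightly delta dump might look something like
this. Since 7.4 can't COPY the result of an arbitrary query, the new rows
are staged in a temp table first:

#!/bin/sh
# Highest id captured by the previous run (0 on the first run).
LAST_ID=$(cat /backup/mail_archive.last_id 2>/dev/null || echo 0)
TODAY=$(date +%Y%m%d)

psql -d maildb <<SQL
-- Stage only the rows added since the last backup.
CREATE TEMP TABLE archive_delta AS
    SELECT * FROM mail_archive WHERE id > ${LAST_ID};
\copy archive_delta TO '/backup/mail_archive_${TODAY}.copy'
SQL

# Record the new high-water mark for the next run.
psql -Atq -d maildb -c "SELECT COALESCE(max(id), 0) FROM mail_archive" \
    > /backup/mail_archive.last_id

Each run then writes only the rows added since the previous one, and the
.copy files can be loaded back into a fresh table with COPY ... FROM if you
ever have to rebuild the archive.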
--
Simon Riggs
EnterpriseDB http://www.enterprisedb.com