From: D Johnson <dspectra(at)home(dot)com>
To: Tim White <twhite26(at)kc(dot)rr(dot)com>
Cc: Ragnar Kjørstad <postgres(at)ragnark(dot)vestdata(dot)no>, pgsql-admin(at)postgresql(dot)org
Subject: Re: BAcking up a Postgres Database
Date: 2001-01-12 00:12:19
Message-ID: 3A5E4BE3.1CFF9C04@home.com
Lists: pgsql-admin
Yes, the first method is not viable because you cannot simply take down your
system every time you need to do a backup, and stopping and restarting a large
server application that manages data is risky in itself. The pg_dump method
would be better.
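For reference, a minimal pg_dump-based nightly backup could be scheduled from cron along these lines (the database name "mydb" and the backup path are placeholders, not anything from this thread):

```shell
# Example crontab entry: compressed logical dump every night at 02:30.
# "mydb" and /var/backups are placeholders; % must be escaped in crontab.
30 2 * * * pg_dump mydb | gzip > /var/backups/mydb-$(date +\%Y\%m\%d).sql.gz
```

Since pg_dump takes a consistent snapshot inside a transaction, this runs against a live server, but it still only captures the state at dump time, not up to the point of failure.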
I have worked extensively with Oracle database apps that collect data 24/7.
This test data is highly critical, and you need a transactional logging
system with recovery and roll-forward capability, as Tim indicated. We have
saved our rear ends a number of times thanks to Oracle's archiving.
I was wondering about adding transactional recovery around pg_dump, making it
similar to Oracle's RMAN recovery application and recovery database. This would
make Postgres more robust in that it would provide good integrity and
allow you to develop drivers for various backup devices, e.g. tape drives.
Dave Johnson
Tim White wrote:
> Does this provide true "point of failure" recovery? This sounds like no
> more than a cold backup,
> which does not provide "point of failure" recovery. I think the original
> question is very valid. Postgres
> does not, to my knowledge, support transaction logging, which is necessary
> for this style of recovery.
> In Oracle, you restore the data files from a previous backup and then
> re-apply the transaction (archive)
> logs, a process called "rolling forward", then you can open the database
> for use, and it is in the state
> just prior to the failure. I've seen some creative dialogue on this list
> about writing to multiple database
> instances to have a live backup, and some regarding logging each SQL
> statement, but the introduction
> of a transaction archiver into the engine itself would make this process
> much easier and make Postgres
> more attractive to sites currently using the major commercial database
> packages, IMHO.
>
> Let me know if any of this is blatantly incorrect.
>
> Tim White
>
> Ragnar Kjørstad wrote:
>
> > On Wed, Jan 10, 2001 at 05:57:21AM -0600, D Johnson wrote:
> > > Will the postgres community ever consider creating a decent backup
> > > capability? Currently the way to create backups is through a cron job.
> > > For Postgres to ever be considered a true production system, some
> > > sort of transactional tracing has to be done. Otherwise you risk the
> > > potential of losing quite a bit of data.
> >
> > You can take a snapshot of the database-device (while the database is
> > down), and backup from the snapshot to avoid this problem.
> >
> > You need a volume manager that supports snapshots, but AFAIK most do.
> >
> > --
> > Ragnar Kjørstad
> > BigStorage
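Ragnar's snapshot approach can be sketched with LVM, assuming the data directory sits on a logical volume; all volume names, mount points, and paths below are illustrative placeholders:

```shell
#!/bin/sh
# Sketch of a snapshot-based cold backup with LVM.
# /dev/vg0/pgdata, /mnt/pgsnap, and the data directory are placeholders.
pg_ctl stop -D /var/lib/pgsql/data            # stop the postmaster for a consistent image
lvcreate -s -L 1G -n pgsnap /dev/vg0/pgdata   # create a snapshot of the data volume
pg_ctl start -D /var/lib/pgsql/data           # restart; downtime is only the snapshot window
mount -o ro /dev/vg0/pgsnap /mnt/pgsnap       # mount the frozen snapshot read-only
tar czf /var/backups/pgdata.tar.gz -C /mnt/pgsnap .
umount /mnt/pgsnap
lvremove -f /dev/vg0/pgsnap                   # discard the snapshot once archived
```

This shortens the outage to seconds, but as Tim notes it is still a cold backup: restoring it returns you to snapshot time, not to the point of failure.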