From: Greg Stark <gsstark(at)mit(dot)edu>
To: Dennis Gearon <gearond(at)fireserve(dot)net>
Cc: Greg Stark <gsstark(at)mit(dot)edu>, pgsql-general(at)postgresql(dot)org
Subject: Re: Linux ready for high-volume databases?
Date: 2003-08-27 04:35:29
Message-ID: 87smnnn2em.fsf@stark.dyndns.tv
Lists: pgsql-general
Dennis Gearon <gearond(at)fireserve(dot)net> writes:
> With the low cost of disks, it might be a good idea to just copy to disks, that
> one can put back in.
Uh, sure, using hardware RAID 1 and breaking one set of drives out of the
mirror to perform the backup is an old trick. And for small databases, backups
are easy that way: just keep a few dozen copies of the pg_dump output on your
live disks for local backups and burn CD-Rs for offsite backups.
But when you have hundreds of gigabytes of data and you want to be able to
keep multiple snapshots of your database both on-site and off-site... No, you
can't just buy another hard drive and call it a business continuity plan.
As it turns out, my current project will be quite small, so I may well adopt
the first approach: taking a pg_dump regularly (nightly, if I can get away with
doing it that infrequently), keeping the past n dumps, and burning a CD with
those dumps.
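For what it's worth, a minimal sketch of that kind of rotation is below; DB_NAME,
BACKUP_DIR, and KEEP are placeholders for illustration, not settings from any
real installation:

    #!/usr/bin/env python3
    # Hypothetical nightly pg_dump rotation sketch -- DB_NAME, BACKUP_DIR,
    # and KEEP are illustrative placeholders, not a real configuration.
    import subprocess
    from datetime import datetime
    from pathlib import Path

    DB_NAME = "mydb"                      # assumed database name
    BACKUP_DIR = Path("/var/backups/pg")  # assumed dump directory
    KEEP = 7                              # how many past dumps to retain

    def nightly_dump():
        BACKUP_DIR.mkdir(parents=True, exist_ok=True)
        stamp = datetime.now().strftime("%Y-%m-%d")
        dump_path = BACKUP_DIR / f"{DB_NAME}-{stamp}.dump"
        # pg_dump -Fc writes a compressed custom-format archive that
        # pg_restore can read back later.
        with open(dump_path, "wb") as out:
            subprocess.run(["pg_dump", "-Fc", DB_NAME], stdout=out, check=True)
        # Prune everything but the newest KEEP dumps (names sort by date).
        dumps = sorted(BACKUP_DIR.glob(f"{DB_NAME}-*.dump"))
        for old in dumps[:-KEEP]:
            old.unlink()

    if __name__ == "__main__":
        nightly_dump()

Run nightly from cron; the dated filenames make the retention step a simple
sort-and-prune, and the pruned directory is what would get burned to CD.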
This doesn't provide what online backups do: recovery up to the minute of the
crash. And I get nervous having only logical pg_dump output and no backups of
the actual blocks on disk. But is that what everybody does?
--
greg