From: Vivek Khera <khera(at)kcilink(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Linux ready for high-volume databases?
Date: 2003-08-27 16:30:05
Message-ID: x7n0dvoygi.fsf@yertle.int.kciLink.com
Lists: pgsql-general
>>>>> "GS" == Greg Stark <gsstark(at)mit(dot)edu> writes:
GS> the first approach. I'm thinking of taking a pg_dump regularly
GS> (nightly, if I can get away with doing it that infrequently),
GS> keeping the past n dumps, and burning a CD with those dumps.
Basically what I do. I burn a set of CDs from one of my dumps once a
week, and keep the rest online for a few days. I'm really getting
close to splurging for a DVD writer since my dumps are way too big for
a single CD.
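
In case it helps, the shape of that nightly dump-and-rotate job is
something like the sketch below. This isn't my actual script; the
database name, dump directory, and retention count are all
placeholders to adjust:

    #!/usr/bin/env python
    # Sketch of a nightly dump-and-rotate job. DB, DUMP_DIR, and
    # KEEP are hypothetical values; set them for your own setup.
    import datetime
    import glob
    import os
    import subprocess

    DB = "mydb"                    # assumed database name
    DUMP_DIR = "/var/backups/pg"   # assumed dump directory
    KEEP = 7                       # keep the past n dumps online

    stamp = datetime.date.today().isoformat()
    outfile = os.path.join(DUMP_DIR, "%s-%s.dump" % (DB, stamp))

    # pg_dump writes a consistent logical snapshot even while the
    # database is in use; -Fc gives the compressed custom format.
    with open(outfile, "wb") as f:
        subprocess.check_call(["pg_dump", "-Fc", DB], stdout=f)

    # Drop everything but the newest KEEP dumps. ISO date stamps
    # sort lexicographically, so a plain sort orders them by age.
    dumps = sorted(glob.glob(os.path.join(DUMP_DIR, "%s-*.dump" % DB)))
    for old in dumps[:-KEEP]:
        os.remove(old)

Restoring from the custom-format dump is then just a matter of
feeding the file to pg_restore.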
GS> This doesn't provide what online backups do: recovery to the
GS> minute of the crash. And I get nervous having only logical pg_dump
GS> output, with no backups of the actual blocks on disk. But is that
GS> what everybody does?
Well, if you want backups of the blocks on disk, then you need to shut
down the postmaster so that the copy is consistent. You can't just
copy the table files while the server is live.
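
A "cold" file-level copy looks something like this sketch. The pg_ctl
commands are the standard ones, but the data directory and archive
paths here are made-up examples:

    #!/usr/bin/env python
    # Sketch of a "cold" file-level backup: stop the postmaster,
    # copy the data directory, restart. PGDATA and ARCHIVE are
    # assumed paths; adjust for your installation.
    import subprocess

    PGDATA = "/usr/local/pgsql/data"        # assumed data directory
    ARCHIVE = "/var/backups/pgdata.tar.gz"  # assumed archive path

    # Stop the server cleanly so the table files are quiescent.
    subprocess.check_call(["pg_ctl", "stop", "-D", PGDATA, "-m", "fast"])
    try:
        # The copy is consistent only because nothing is writing.
        subprocess.check_call(["tar", "czf", ARCHIVE, PGDATA])
    finally:
        # Bring the postmaster back up even if the copy failed.
        subprocess.check_call(["pg_ctl", "start", "-D", PGDATA])

The try/finally matters: you want the postmaster back up even when
the copy step dies partway through.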
So, yes, pg_dump is pretty much your safest bet for a consistent
dump. Using a replicated slave with, e.g., eRServer is another
option, but that requires more hardware.
--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D. Khera Communications, Inc.
Internet: khera(at)kciLink(dot)com Rockville, MD +1-240-453-8497
AIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/