High Reliability without High Availability?

From: Al Cohen <amc79(at)no(dot)junk(dot)please(dot)cornell(dot)edu>
To: pgsql-general(at)postgresql(dot)org
Subject: High Reliability without High Availability?
Date: 2004-03-19 12:29:11
Message-ID: rP6cncobCOMEesfdRWPC-w@speakeasy.net
Lists: pgsql-general

We've been using PostgreSQL for some time, and it's been very, very
reliable. However, we're starting to think about preparing for
something bad happening - dead drives, fires, locusts, and whatnot.

In our particular situation, being down for two hours or so is OK.
What's really bad is losing data.

The PostgreSQL replication solutions that we're seeing are very clever,
but seem to require significant effort to set up and keep going. Since
we don't care whether a slave DB is ready to take over at a moment's notice,
I'm wondering if there is some way to generate data, in real time, that
would allow an offline rebuild in the event of catastrophe. We could
copy this data across the 'net as it's available, so we could be OK even
if the place burned down.
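
For illustration, the simplest version of what we have in mind would be something
like the sketch below: a periodic pg_dump whose output gets copied to an offsite
machine. The database name, offsite host, paths, and interval are all placeholders,
not real values from our setup.

    #!/usr/bin/env python
    """Periodic pg_dump shipped to an offsite host (illustrative sketch only).

    The database name, offsite target, and interval are placeholders.
    """

    import subprocess
    import time
    from datetime import datetime

    DB_NAME = "mydb"                          # placeholder database name
    OFFSITE = "backup.example.com:/backups/"  # placeholder offsite target (scp syntax)
    INTERVAL_SECONDS = 60 * 60                # dump once an hour


    def dump_and_ship():
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        dump_file = f"/var/backups/{DB_NAME}-{stamp}.dump"

        # pg_dump -Fc produces a compressed, custom-format archive that
        # pg_restore can rebuild on a fresh machine after a disaster.
        subprocess.check_call(["pg_dump", "-Fc", "-f", dump_file, DB_NAME])

        # Copy the dump offsite so a local disaster doesn't take the backup too.
        subprocess.check_call(["scp", dump_file, OFFSITE])


    if __name__ == "__main__":
        while True:
            dump_and_ship()
            time.sleep(INTERVAL_SECONDS)

The obvious drawback is that anything committed after the last dump is still lost,
which is why we're hoping for something closer to a continuously written log.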

Is there a log file that does or could do this? Or some internal system
table that we could use to generate something?

Thanks!

Al Cohen
