Re: Hot Standby vs slony

From: bricklen <bricklen(at)gmail(dot)com>
To: Mark Steben <mark(dot)steben(at)drivedominion(dot)com>
Cc: pgsql-admin <pgsql-admin(at)postgresql(dot)org>
Subject: Re: Hot Standby vs slony
Date: 2018-02-08 22:29:52
Message-ID: CAGrpgQ8X4Q2s2jGHK=OcTf7kNfPFtgthtXzo1JA3zqj9mMSzFg@mail.gmail.com
Lists: pgsql-admin

On Thu, Feb 8, 2018 at 1:09 PM, Mark Steben <mark(dot)steben(at)drivedominion(dot)com>
wrote:

> Good afternoon,
>
> We currently run postgres 9.4 with the following configuration:
>
>                    +--> slony (reporting, hi-availability)
>  production  -----+
>                    +--> hot standby (dr)
>
> We would like to replace slony with another instance of hot standby as
> follows:
>
>
>                    +--> hot standby1 (reporting, ha)
>  production  -----+
>                    +--> hot standby2 (dr)
>
> Is this possible? I see in the documentation that it is possible for warm
> standby, but I don't see a confirmation in the section on hot standby.
>
Yes, you can run multiple hot standbys from the primary, or cascade hot
standbys from each other (or combine both approaches).
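As a rough sketch (hostnames and the replication user are placeholders, not from your setup), a 9.4 cascaded hot standby only needs the upstream to be able to send WAL and the downstream's recovery.conf to point at it:

```
# postgresql.conf on the primary AND on any standby that feeds another standby
wal_level = hot_standby
max_wal_senders = 5
hot_standby = on          # allows read-only queries on the standby

# recovery.conf on each standby
standby_mode = 'on'
primary_conninfo = 'host=primary.example.com port=5432 user=replicator'
# For a cascaded standby, point primary_conninfo at the upstream
# standby instead of the primary.
```

Each standby also needs a pg_hba.conf replication entry on its upstream.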

I can say that with confidence as one of the common configurations I'm
running (for roughly 1500 servers) consists of a primary PG cluster with a
hot standby using streaming replication (async replication) within the same
data centre, with a remote "primary" hot standby fed by WAL shipping, and a
remote hot standby streaming off that. The remote primary is running with
delayed WAL application, which varies between 1 and 4 hours, depending on
the class of replica sets we are running. This configuration covers basic
DR, HA, and in case of user-error we can fail over (promote the remote
primary replica before any user-destructive changes are applied to the
remote hot standby). One caveat is that a sudden interruption between DCs
followed by a failover could result in some data loss, depending on the
archive_timeout/WAL switch rate, etc., but that is a business RPO that we
have agreed upon with clients.
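To make that concrete, here is one way the delayed, WAL-shipped remote standby could be configured on 9.4; the archive path and delay value are illustrative, not taken from our actual setup:

```
# recovery.conf on the remote "primary" hot standby (fed by WAL shipping)
standby_mode = 'on'
restore_command = 'cp /wal_archive/%f "%p"'
recovery_min_apply_delay = '4h'   # 9.4+: hold back WAL application
                                  # so user errors can be caught first

# postgresql.conf on the production primary
archive_mode = on
archive_command = 'cp %p /wal_archive/%f'   # placeholder; use your archiver
archive_timeout = '60s'   # forces a WAL switch at least once a minute,
                          # bounding how much data a DC outage can lose
```

Promoting the delayed standby before a destructive change replays (pg_ctl promote) is what gives the user-error recovery path described above.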
