Manual failover cluster

From: Hispaniola Sol <moishap(at)hotmail(dot)com>
To: "pgsql-general(at)lists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Manual failover cluster
Date: 2021-08-20 12:48:43
Message-ID: SA1PR13MB5023BFAC9954144ECBE2181184C19@SA1PR13MB5023.namprd13.prod.outlook.com
Lists: pgsql-general

Team,

I have a PostgreSQL 10 cluster with a master and two hot-standby nodes, and a requirement to perform a manual failover (the nodes switching roles) at will. It is a vanilla 3-node PG cluster built with WAL archiving (to a central location) and streaming replication to the two hot standbys. The failover is scripted in Ansible: the playbook rewrites and moves around the archive/restore scripts, the conf files and the trigger file, then calls `pg_ctlcluster` to stop/start the instances. This part _seems_ to be doing the job fine.
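For context, this is roughly what the playbook drops into recovery.conf on a node that is (re)configured as a standby; the host name, archive path and trigger path below are placeholders rather than my real values:

    # recovery.conf on a standby (PostgreSQL 10); host/paths are placeholders
    standby_mode = 'on'
    primary_conninfo = 'host=new-master port=5432 user=replicator'
    restore_command = 'cp /mnt/wal_archive/%f %p'      # pull from the central archive
    recovery_target_timeline = 'latest'                # follow the promoted node's timeline
    trigger_file = '/var/lib/postgresql/10/main/failover.trigger'

and the role switch itself boils down to something like:

    # stop the old master
    pg_ctlcluster 10 main stop

    # promote the chosen standby (path matches trigger_file above)
    touch /var/lib/postgresql/10/main/failover.trigger

    # repoint the remaining standby at the new master and restart it
    pg_ctlcluster 10 main restart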

The issue I am struggling with is the apparent fragility of the process: all 3 nodes end up in a "good" state after the switch only about every other time. The rest of the time I have to rebuild a hot standby from the new master with pg_basebackup. The problems are mostly on the nodes that end up as standbys after the role switch runs.
They hit errors such as timeline mismatches, restoring the same WAL segment over and over again, "invalid resource manager ID in primary checkpoint record", and so on.
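When a standby gets wedged like that, the rebuild is essentially the following (run as the postgres user; the host, replication user and data directory are placeholders):

    # re-seed the broken standby from the new master (PostgreSQL 10)
    pg_ctlcluster 10 main stop
    rm -rf /var/lib/postgresql/10/main/*
    pg_basebackup -h new-master -U replicator -D /var/lib/postgresql/10/main \
        -X stream -R -P    # -R writes a minimal recovery.conf pointing at the new master
    pg_ctlcluster 10 main start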

In this light, I am wondering: using only what PostgreSQL itself offers, i.e. streaming WAL replication with log shipping, can I expect this kind of failover to be 100% reliable on the PG side? Is anyone doing this reliably on PostgreSQL 10.1x?

Thanks!

Moishe
