From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: Sameer Kumar <sameer(dot)kumar(at)ashnik(dot)com>
Cc: Susan Cassidy <susan(dot)cassidy(at)decisionsciencescorp(dot)com>, Dmitry Koterov <dmitry(dot)koterov(at)gmail(dot)com>, Postgres General <pgsql-general(at)postgresql(dot)org>
Subject: Re: Fully-automatic streaming replication failover when master dies?
Date: 2014-01-24 02:41:54
Message-ID: CAOR=d=1ReRnPa9nVHk8OzRx2E-oM2vim7xf=xeJoLn6YP-KWOQ@mail.gmail.com
Lists: pgsql-general
On Thu, Jan 23, 2014 at 7:16 PM, Sameer Kumar <sameer(dot)kumar(at)ashnik(dot)com> wrote:
>
>
> On Fri, Jan 24, 2014 at 1:38 AM, Susan Cassidy <susan(dot)cassidy(at)decisionsciencescorp(dot)com> wrote:
>>
>> pgpool-II may do what you want. Lots of people use it.
>
>
> I don't think pgpool adds the lost node back on its own (once the node is live or available again). Plus, if you have a 3-node replication setup, you need your own failover_command (a shell script) which re-points the second standby at the new master when one of the standbys is promoted to primary. I hope things will get easier with version 9.4 (I guess in 9.4 one won't have to rebuild the old master node from a backup; if the WAL files are available it will just roll forward).
>
>> > for all the machines). At least MongoDB does the work well, and with almost
>> > zero configuration.
>> Mongo's data guarantees are, um, somewhat less robust than
>> PostgreSQL's.
>
>
> I don't think this has anything to do with data reliability or ACID properties (if that is what you are referring to).
>
>> Failover is easy if you don't have to be exactly right.
>
>
> IMHO that's not a fair point. PostgreSQL supports sync replication (as well as async); does that complicate the failover process any more than async replication does? I guess what he is asking for is automation of whatever features PostgreSQL already supports.
No, it's a fair point. When you go from "we promise to try not to lose
your data" to "we promise not to lose any of your data" the situation
is much different.
There are many things to consider in the PostgreSQL situation. Is it
more important to keep your application up and running, even if only
in read-only mode? Is performance more important than data integrity?
How many nodes do you have? How many can auto-fail over before you
fail over to the very last one? How do you rejoin failed nodes:
one at a time, all at once, by hand, automagically? And so on. There
are a LOT of questions to ask that mongo has already decided for you,
and the decision was that if you lose some data that's OK as long as
the cluster stays up. With PostgreSQL, your decision making process
has a big impact on how you answer these types of questions and how
you fail over.
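To make the performance vs. data integrity question concrete,
synchronous replication is just a couple of settings on the primary.
A rough sketch (here 'standby1' stands in for whatever
application_name your standby uses in its primary_conninfo):

    # postgresql.conf on the primary
    synchronous_standby_names = 'standby1'  # commits wait for this standby to confirm
    synchronous_commit = on                 # or remote_write / local / off; can be set
                                            # per session or per transaction if you want
                                            # to trade some durability for throughput

Leave synchronous_standby_names empty (or set synchronous_commit =
local for a given transaction) and you're back to async, with the
corresponding window for data loss on failover.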
Add to that the fact that most PostgreSQL database servers are VERY
robust, with multi-lane RAID controllers and/or sturdy SANs underneath
them, so their failure rates are very low, and you run the risk of your
auto-failover causing as much of an outage as the server failing would,
since most failovers are going to cause some short interruption in
service. It's not a simple push-a-button-take-a-banana,
one-size-fits-all problem and solution.
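For what it's worth, the failover_command hook Sameer mentions is just
a line in pgpool.conf pointing at a script you have to write yourself;
pgpool only hands it a few placeholders. A minimal sketch (hosts, paths
and the script name are made up, and the trigger file has to match
trigger_file in the standby's recovery.conf):

    # pgpool.conf
    failover_command = '/etc/pgpool2/failover.sh %d %P %m %H'
    #   %d = id of the failed node        %P = id of the old primary
    #   %m = id of the new master node    %H = hostname of the new master

    #!/bin/bash
    # /etc/pgpool2/failover.sh -- promote a standby only if the dead node was the primary
    failed_node_id="$1"
    old_primary_id="$2"
    new_master_id="$3"
    new_master_host="$4"

    if [ "$failed_node_id" = "$old_primary_id" ]; then
        # touching the trigger file makes the standby leave recovery and become the new primary
        ssh postgres@"$new_master_host" "touch /var/lib/postgresql/promote_trigger"
    fi
    exit 0

Everything past that (re-pointing the other standby, re-adding the old
master when it comes back) is still yours to script, which is kind of
the point I'm making above.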