From: Fujii Masao <masao(dot)fujii(at)gmail(dot)com>
To: Josh Berkus <josh(at)agliodbs(dot)com>
Cc: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Sawada Masahiko <sawada(dot)mshk(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Beena Emerson <memissemerson(at)gmail(dot)com>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>, Peter Eisentraut <peter_e(at)gmx(dot)net>
Subject: Re: Support for N synchronous standby servers - take 2
Date: 2015-07-02 06:12:15
Message-ID: CAHGQGwGU2DV0K17sHzyfLVAfq_cZm5ijAYGLwY7HkSgyX0brOw@mail.gmail.com
Lists: pgsql-hackers
On Thu, Jul 2, 2015 at 3:21 AM, Josh Berkus <josh(at)agliodbs(dot)com> wrote:
> All:
>
> Replying to multiple people below.
>
> On 07/01/2015 07:15 AM, Fujii Masao wrote:
>> On Tue, Jun 30, 2015 at 2:40 AM, Josh Berkus <josh(at)agliodbs(dot)com> wrote:
>>> You're confusing two separate things. The primary manageability problem
>>> has nothing to do with altering the parameter. The main problem is: if
>>> there is more than one synch candidate, how do we determine *after the
>>> master dies* which candidate replica was in synch at the time of
>>> failure? Currently there is no way to do that. This proposal plans to,
>>> effectively, add more synch candidate configurations without addressing
>>> that core design failure *at all*. That's why I say that this patch
>>> decreases overall reliability of the system instead of increasing it.
>>
>> I agree this is a problem even today, but it's basically independent of
>> the proposed feature *itself*. So I think it's better to discuss and
>> work on that problem separately. That way, we might be able to provide
>> a good way to find the new master even if the proposed feature is
>> ultimately not adopted.
>
> I agree that they're separate features. My argument is that the quorum
> synch feature isn't materially useful if we don't create some feature to
> identify which server(s) were in synch at the time the master died.
>
> The main reason I'm arguing on this thread is that discussion of this
> feature went straight into GUC syntax, without ever discussing:
>
> * what use cases are we serving?
> * what features do those use cases need?
>
> I'm saying that we need to have that discussion first before we go into
> syntax. We gave up on quorum commit in 9.1 partly because nobody was
> convinced that it was actually useful; that case still needs to be
> established, and if we can determine *under what circumstances* it's
> useful, then we can know if the proposed feature we have is what we want
> or not.
>
> Myself, I have two use cases for changes to sync rep:
>
> 1. the ability to specify a group of three replicas in the same data
> center, and have commit succeed if it succeeds on two of them. The
> purpose of this is to avoid data loss even if we lose the master and one
> replica.
>
> 2. the ability to specify that synch needs to succeed on two replicas in
> two different data centers. The idea here is to be able to ensure
> consistency between all data centers.
Yeah, I'm also thinking of those *simple* use cases. I'm not sure
how many people really want a very complicated quorum commit
setting.
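
Just to make the discussion concrete: for the first use case, a
quorum-style setting might look something like the sketch below. The
grammar is purely illustrative (the exact syntax is what's under
discussion here); the intent is "commit returns once 2 of the 3 listed
standbys have sent an ACK".

    # postgresql.conf on the master -- illustrative grammar only,
    # not a settled syntax. s1, s2, s3 are the application_names of
    # the three replicas; "2" is the number of ACKs required.
    synchronous_standby_names = '2(s1, s2, s3)'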
> Speaking of which: how does the proposed patch roll back the commit on
> one replica if it fails to get quorum?
Do you mean the case where there are two sync replicas, the master
needs to wait until both send an ACK, and then one replica goes down?
In that case, the master receives the ACK from only one replica and
must keep waiting until a new sync replica appears and sends back
its ACK. So the committed transaction (whose WAL record has already
been written) would not be rolled back.
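
Note that, independently of this patch, the master already reports
which walsenders it currently treats as sync, via pg_stat_replication:

    -- On the master: sync_state is 'sync', 'potential' or 'async'
    -- for each connected standby.
    SELECT application_name, state, sync_priority, sync_state
      FROM pg_stat_replication;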
> Well, one possibility is to have each replica keep a flag which
> indicates whether it thinks it's in sync or not. This flag would be
> updated every time the replica sends a sync-ack to the master. There's a
> couple issues with that though:
I don't think this is a good approach, because there are cases where
you need to promote a standby that does not have the sync flag.
Imagine the case where you have both a sync and an async standby server.
When the master goes down, the async standby might be ahead of the
sync one; this is possible in practice. In this case, it might be better
to promote the async standby instead of the sync one, because the
remaining sync standby, which is behind, can easily catch up with the
new master. We could promote the sync standby instead, but since the
remaining async standby is ahead, it cannot easily catch up with the
new master: probably a new base backup would need to be taken onto the
async standby from the new master, or pg_rewind would need to be run.
That is, the async standby would basically need to be set up again.
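
For reference, that pg_rewind path would look roughly like this, run
against the stopped async standby (the data directory and connection
string are just placeholders; it also requires wal_log_hints = on or
data checksums on the cluster):

    $ pg_rewind --target-pgdata=/var/lib/pgsql/async_standby \
                --source-server='host=new-master port=5432 dbname=postgres'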
So I'm thinking that we basically need to check the replication
progress on each standby in order to choose the new master.
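
For example, with the functions that already exist today (the locations
in the second query are of course just placeholders):

    -- Run on each standby after the master dies; promote the one
    -- whose receive location is furthest ahead.
    SELECT pg_last_xlog_receive_location();

    -- Locations can be compared with pg_xlog_location_diff();
    -- a positive result means the first location is ahead.
    SELECT pg_xlog_location_diff('0/5000060', '0/5000000');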
Regards,
--
Fujii Masao