From: Florian Pflug <fgp(dot)phlo(dot)org(at)gmail(dot)com>
To: Simon Riggs <simon(at)2ndQuadrant(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>, Hiroyuki Yamada <yamada(at)kokolink(dot)net>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Peter Eisentraut <peter_e(at)gmx(dot)net>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: alpha3 release schedule?
Date: 2009-12-22 15:32:42
Message-ID: 4B30E69A.3010809@gmail.com
Lists: pgsql-hackers

On 22.12.09 13:21, Simon Riggs wrote:
> On Tue, 2009-12-22 at 12:32 +0100, Florian Pflug wrote:
>> Imagine a reporting database where all transactions but a few daily
>> bulk imports are read-only. To spread the load, you do your bulk
>> loads on the master, but run the reporting queries against a
>> read-only HS slave. Now you take the master down for maintenance.
>> Since all clients but the bulk loader use the slave already, and
>> since the bulk loads can be deferred until after the maintenance
>> window closes again, you don't actually do a fail-over.
>>
>> Now you're already pointing at your foot with the gun. All it
>> takes to ruin your day is *some* reason for the slave to restart.
>> Maybe due to a junior DBA's typo, or maybe due to a bug in
>> postgres. Anyway, once the slave is down, it won't come up until you
>> manage to get the master up and running again. And this limitation
>> is pretty surprising, since one would assume that if the slave
>> survives a *crash* of the master, it'd certainly survive a simple
>> *shutdown*.
>
> Well, you either wait for master to come up again and restart, or you
> flip into normal mode and keep running queries from there. You aren't
> prevented from using the server, except by your own refusal to
> failover.

Very true. However, that "refusal", as you put it, might actually be the
most sensible thing to do in a lot of setups. Not everyone needs extreme
up-time guarantees, and for those people, setting up, testing and
*continuously* exercising fail-over is just not worth the effort.
Especially since fail-over with asynchronous replication is tricky to
get right if you want to avoid data loss.
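
To illustrate what I mean by "tricky": the most a pre-failover check can
do is verify that the standby has replayed everything it ever received.
Below is a rough sketch of such a check, written against psycopg2 and two
standby-status functions (pg_last_xlog_receive_location(),
pg_last_xlog_replay_location()) that I'm assuming for the sake of the
example, so treat it as an illustration rather than something you can run
against alpha3. Even when it reports "caught up", any WAL the master
generated but never shipped is silently gone, and that is exactly the
part that is hard to get right.

    import psycopg2

    def parse_lsn(lsn):
        # WAL locations come back as 'hi/lo' in hex; fold them into one
        # integer so they can be compared numerically.
        hi, lo = lsn.split('/')
        return (int(hi, 16) << 32) + int(lo, 16)

    def standby_caught_up(dsn):
        # True if the standby has replayed all WAL it has received. This
        # says nothing about WAL the master generated but never shipped.
        conn = psycopg2.connect(dsn)
        try:
            cur = conn.cursor()
            cur.execute("SELECT pg_last_xlog_receive_location(), "
                        "pg_last_xlog_replay_location()")
            received, replayed = cur.fetchone()
            if received is None or replayed is None:
                # Pure file-based log shipping: no receive position to
                # compare against, so we cannot tell.
                return False
            return parse_lsn(replayed) >= parse_lsn(received)
        finally:
            conn.close()

    if standby_caught_up("host=standby dbname=postgres"):
        print("standby has replayed everything it received")
    else:
        print("standby is still behind on replay, or we cannot tell")
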
So I still believe that there are very real use-cases for HS where this
limitation can be quite a PITA.
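
For concreteness, the standby in the reporting example above is nothing
exotic, just an ordinary archive-recovery standby with hot standby
enabled; roughly something like the following, with the paths made up
purely for illustration:

    # postgresql.conf on the standby
    hot_standby = on        # accept read-only queries while in recovery

    # recovery.conf on the standby
    restore_command = 'pg_standby /wal_archive %f %p %r'
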
But you are of course free to work on whatever you feel like, and
probably need to satisfy your client's needs first. So I'm in no way
implying that this is a must-fix issue, or that you're in any way
obliged to take care of it. I merely wanted to make the point that there
*are* valid use-cases where this behavior is not ideal.

best regards,
Florian Pflug