From: Bruce Momjian <bruce(at)momjian(dot)us>
To: Josh Berkus <josh(at)agliodbs(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Petr Jelinek <petr(at)2ndquadrant(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Horizontal scalability/sharding
Date: 2015-09-02 23:14:43
Message-ID: 20150902231443.GB23640@momjian.us
Lists: pgsql-hackers
On Wed, Sep 2, 2015 at 12:03:36PM -0700, Josh Berkus wrote:
> Well, there is a WIP patch for that, which IMHO would be much improved
> by having a concrete use case like this one. What nobody is working on
> -- and we've vetoed in the past -- is a way of automatically failing,
> and removing from replication, any node that repeatedly fails to sync,
> which would be a requirement for this model.
>
> You'd also need a way to let the connection nodes know when a replica
> has fallen behind so that it can be taken out of
> load-balancing/sharding for read queries. For the synchronous model,
> that would be "fallen behind at all"; for asynchronous it would be
> "fallen more than ### behind".
I think this gets back to the idea of running an administrative alert
command when we switch to using a different server from
synchronous_standby_names. We can't keep requiring external tooling to
detect events the database already knows about and could easily alert
on itself. Automatically removing failed nodes, and notifying users
when we do, is also something we should support.
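Until then, the closest approximation is an external polling loop
along these lines (again only a sketch; alert() is a placeholder for
whatever notification hook we settle on, and the polling interval is
arbitrary):

# Sketch: poll for the event described above, a change in which
# standby is currently synchronous, and fire an alert when it happens.
# alert() is a placeholder, not an existing API.
import time
import psycopg2

def watch_sync_standby(primary_dsn, alert, interval=5):
    """Call alert(old, new) whenever the active sync standby changes."""
    conn = psycopg2.connect(primary_dsn)
    conn.autocommit = True   # avoid idle-in-transaction while polling
    last = None
    with conn.cursor() as cur:
        while True:
            cur.execute("""
                SELECT application_name
                FROM pg_stat_replication
                WHERE sync_state = 'sync'
            """)
            row = cur.fetchone()
            current = row[0] if row else None   # None: no sync standby left
            if current != last:
                alert(last, current)   # placeholder notification hook
                last = current
            time.sleep(interval)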
--
Bruce Momjian <bruce(at)momjian(dot)us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ Everyone has their own god. +