From: Marco Colli <collimarco91(at)gmail(dot)com>
To: Gunther Schadow <raj(at)gusw(dot)net>
Cc: pgsql-performance(at)lists(dot)postgresql(dot)org
Subject: Re: Conundrum with scaling out of bottleneck with hot standby, PgPool-II, etc.
Date: 2020-12-23 18:34:01
Message-ID: CAFvCgN7nTA5wEkKbeTg9fOOExAVjhxBUvZvho2uA0=nrDV9+4g@mail.gmail.com
Lists: pgsql-performance
Hello,
I have asked myself the same question over the past years.
I think it boils down to: how can I achieve unlimited database
scalability? Is it possible to have linear scalability (i.e. throughput
that increases proportionally to the number of nodes)?
The answer is "sharding". It can be a custom solution or a database that
supports it natively. In this way you actually split the data across
multiple nodes, and the client contacts only the servers relevant to that
data (based on a shard key). See also
https://kubernetes-rails.com/#conclusion about databases. Take a look at
how Cassandra, MongoDB, CouchDB and Redis Cluster work, for example:
note, however, that their unlimited-scalability strategies come with huge
limitations and drawbacks.
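To illustrate the idea, here is a minimal sketch of shard-key routing
(node names are hypothetical, and real systems typically use consistent
hashing rather than plain modulo, so that adding a node does not remap
most keys):

```python
import hashlib

# Hypothetical shard nodes; in practice these would be connection endpoints.
NODES = ["node-a", "node-b", "node-c", "node-d"]

def node_for(shard_key: str) -> str:
    # Stable hash so every client maps the same key to the same node.
    digest = hashlib.sha256(shard_key.encode()).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

# All rows for this key live on a single node; a query for this key
# touches only that node, not the whole cluster, so throughput can
# grow with the number of nodes.
print(node_for("user:42"))
```

The drawbacks mentioned above show up exactly here: queries that do not
include the shard key must fan out to every node, and cross-shard
transactions and joins become hard or impossible.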
As for hot standbys, they are only useful if you have a relatively small
number of writes compared to reads: with standby nodes you only scale
the *read* throughput.
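A minimal sketch of why standbys help only reads (connection strings are
placeholders, and the SELECT check is deliberately naive): every write
must go to the single primary, while reads can be spread round-robin
across the standbys.

```python
# Placeholder connection strings for illustration only.
PRIMARY = "postgres://primary:5432/app"
STANDBYS = ["postgres://standby1:5432/app", "postgres://standby2:5432/app"]

_rr = 0  # round-robin counter for read queries

def route(sql: str) -> str:
    """Pick a server for a query: reads go to standbys, writes to the primary."""
    global _rr
    if sql.lstrip().upper().startswith("SELECT"):
        _rr = (_rr + 1) % len(STANDBYS)
        return STANDBYS[_rr]   # reads scale with the number of standbys
    return PRIMARY             # writes all funnel into one node

print(route("SELECT * FROM users"))
print(route("INSERT INTO users VALUES (1)"))
```

This is roughly what Pgpool-II's load balancing does for you: adding
standbys multiplies read capacity, but write capacity stays that of the
single primary.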
Hope it helps,
Marco Colli