Re: PostgreSQL high availability solutions for high rates.

From: Stephan Schmidt <schmidt(at)dltmail(dot)de>
To: Thomas Kellerer <spam_eater(at)gmx(dot)net>, "pgsql-admin(at)lists(dot)postgresql(dot)org" <pgsql-admin(at)lists(dot)postgresql(dot)org>
Subject: Re: PostgreSQL high availability solutions for high rates.
Date: 2019-03-20 20:37:31
Message-ID: BN7PR08MB4307DB59BDF1D47BB3F66CD7E8410@BN7PR08MB4307.namprd08.prod.outlook.com
Lists: pgsql-admin

> > we are evaluating PostgreSQL for our production,
> > for a system with 1000 updates per second,
> > 1000 inserts per sec,
> > ~1000 deletes per second
> That adds up to 3000 transactions per second, but this:
>
> > 250,000 transactions per day
>
> only results in about 3 transactions per second.
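
(Checking the arithmetic: 250,000 tx / 86,400 s ≈ 2.9 transactions per second.)
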
>
> But even 3000 transactions per second can be sustained with good hardware (especially fast SSDs) without problems
> as long as there are at least some "quiet" times where autovacuum can catch up with the updates and deletes.
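
At sustained update/delete rates like these it can also help to let autovacuum
work faster; a minimal sketch (illustrative values, not a tuned recommendation):

    -- let autovacuum keep up with heavy UPDATE/DELETE churn
    ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 2000;   -- default 200 (via vacuum_cost_limit)
    ALTER SYSTEM SET autovacuum_vacuum_cost_delay = '2ms';  -- default 20ms up to PG 11
    ALTER SYSTEM SET autovacuum_max_workers = 6;            -- default 3; needs a restart
    SELECT pg_reload_conf();  -- applies the cost settings without a restart
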
>
> > we are looking for “active(r/w)-active(r/w)”, “active(r/w)-active(read)”
> > and “active(r/w)-standby” high availability solutions,
>
> Postgres only supports read-only replicas where you can distribute the load of SELECT queries.
> It does not offer a solution to distribute write load over multiple nodes.

I can recommend the BDR extension, which comes at a price, if you need a proper
master-master cluster setup. But you could also set up a couple of standalone
PostgreSQL instances which replicate the written data via logical replication
to a central database. This way you can scale I/O load and have many master
nodes for redundancy. Whether this approach works for you depends on your use
cases. A rough sketch of the logical replication part is below.
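
A minimal sketch of that consolidation (table, host and user names are made up;
the writer nodes need wal_level = 'logical', and the tables must already exist
on the central node):

    -- on each standalone writer node: publish the tables to consolidate
    CREATE PUBLICATION writer_pub FOR TABLE orders, events;

    -- on the central database: one subscription per writer node
    CREATE SUBSCRIPTION sub_writer1
        CONNECTION 'host=writer1.example dbname=app user=repluser password=secret'
        PUBLICATION writer_pub;

Keep in mind that logical replication does not carry DDL or sequence values,
and the writers must not produce conflicting keys (e.g. give each node a
disjoint key range).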

> > what high availability solutions (PG internal or 3rd party) are used successfully, keeping up with similar rates? double those rates?
>
> A master/slave system (including multiple slaves) can be set up without the need for 3rd party tools.
>
> If you want to distribute read requests across slaves you will need a 3rd party tool, e.g. pgPool or pgBouncer.
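
A minimal Pgpool-II sketch for that read load balancing (hypothetical hosts;
parameter names are from the 3.x/4.0 era and were renamed in later releases).
pgBouncer, by contrast, is only a connection pooler and does not itself
distribute SELECTs:

    # pgpool.conf
    backend_hostname0 = 'primary.example'
    backend_port0     = 5432
    backend_weight0   = 1
    backend_hostname1 = 'replica1.example'
    backend_port1     = 5432
    backend_weight1   = 2        # send more SELECTs to the replica
    load_balance_mode = on
    master_slave_mode = on
    master_slave_sub_mode = 'stream'
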
>
> To sustain a high write throughput on a single node you need a fast I/O system.
> Typically this is done with several really fast SSDs or NVMe disks nowadays.
>
> The more CPUs you have, the better.
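
For reference, such a master/slave (streaming replication) setup takes only a
few steps; a minimal sketch, assuming PG 10/11 and made-up hostnames and
credentials:

    -- on the primary (wal_level = 'replica' and max_wal_senders = 10 are
    -- already the defaults on PG 10+; shown here for completeness):
    ALTER SYSTEM SET wal_level = 'replica';
    ALTER SYSTEM SET max_wal_senders = 10;
    CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'secret';

    # pg_hba.conf on the primary: let the standby connect for replication
    host  replication  replicator  10.0.0.0/24  md5

    # on the standby host: clone the primary; -R writes the recovery settings
    # (recovery.conf up to PG 11, standby.signal + postgresql.auto.conf from PG 12)
    pg_basebackup -h primary.example -U replicator \
        -D /var/lib/postgresql/data -R -X stream -P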

Cheers, Stephan
