From: Thomas Kellerer <spam_eater(at)gmx(dot)net>
To: pgsql-admin(at)lists(dot)postgresql(dot)org
Subject: Re: PostgreSQL high availability solutions for high rates.
Date: 2019-03-20 13:34:56
Message-ID: 9123d21d-b379-2c6b-3e8c-95e236e20d91@gmx.net
Lists: pgsql-admin
> we are evaluating PostgreSQL for our production,
> for a system with 1000 updates per second,
> 1000 inserts per second,
> ~1000 deletes per second
That adds up to 3000 transactions per second, but this:
> 250,000 transactions per day
only results in about 3 transactions per second.
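Just to spell out the arithmetic (rounded, and assuming every statement is its own transaction):

    1000 + 1000 + 1000 changes per second        = 3000 transactions per second (at most)
    250,000 transactions / 86,400 seconds a day  ~ 2.9 transactions per second

Those two figures are roughly a factor of 1000 apart, so it is worth double-checking which one is the real requirement.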
But even 3000 transactions per second can be sustained without problems on good hardware (especially fast SSDs),
as long as there are at least some "quiet" periods in which autovacuum can catch up with the updates and deletes.
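If there are no real quiet periods, autovacuum usually has to be made more aggressive than the defaults so it keeps up continuously. A rough sketch of the relevant postgresql.conf knobs (the values are purely illustrative, not a recommendation for your workload):

    # postgresql.conf - example values only, tune against your own monitoring
    autovacuum_max_workers = 6               # default 3
    autovacuum_naptime = 15s                 # default 1min
    autovacuum_vacuum_cost_limit = 2000      # default: falls back to vacuum_cost_limit (200)
    autovacuum_vacuum_scale_factor = 0.05    # default 0.2, vacuum big tables sooner

The same parameters can also be set per table, e.g. ALTER TABLE foo SET (autovacuum_vacuum_scale_factor = 0.01), for the hottest tables.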
> we are looking for "active-active (r/w) (r/w)", "active(r/w) - active(read)"
> and "active(r/w) - stand-by" high availability solutions,
Postgres only supports read-only replicas, to which you can distribute the load of SELECT queries.
It does not offer a built-in solution to distribute the write load over multiple nodes.
> what high availability solutions (PG internal or 3rd party) are used successfully, keeping up with similar rates? double those rates?
A master/slave system (including multiple slaves) can be set up without the need for 3rd party tools.
If you want to distribute read requests across the slaves you will need a 3rd party tool, e.g. pgPool or pgBouncer.
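For reference, this is roughly what the built-in streaming replication setup looks like (host name, network range and the replication role are just placeholders, and the details differ a bit between versions - since v12 recovery.conf is replaced by a standby.signal file):

    # postgresql.conf on the primary
    wal_level = replica
    max_wal_senders = 10

    -- on the primary: the role the standby connects with
    CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD '...';

    # pg_hba.conf on the primary
    host  replication  replicator  192.168.0.0/24  md5

    # on the standby: clone the primary; -R writes the recovery/connection settings
    pg_basebackup -h primary-host -U replicator -D /path/to/standby/data -P -R

The standby can then serve read-only queries (hot_standby is on by default). Promoting it on failure (pg_ctl promote) is a separate step that the built-in replication does not automate.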
To sustain a high write throughput on a single node you need a fast I/O system.
Nowadays this is typically done with several really fast SSDs or NVMe disks.
The more CPUs you have, the better.
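One way to get a rough feeling for whether a given machine can keep up with that write rate is pgbench, which ships with PostgreSQL (the numbers below are just an example invocation, not tuned for your schema or transaction mix):

    # initialize a test database at scale factor 100 (roughly 1.5 GB)
    pgbench -i -s 100 testdb

    # 32 concurrent clients, 8 worker threads, run for 5 minutes
    pgbench -c 32 -j 8 -T 300 testdb

The default pgbench workload is update-heavy, so it gives at least an order-of-magnitude idea of what the I/O system can sustain.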