From: Geervan Hayatnagarkar <pande(dot)arti(at)gmail(dot)com>
To: pgsql-performance(at)lists(dot)postgresql(dot)org
Subject: High-volume writes - what is the max throughput possible
Date: 2021-03-25 18:12:15
Message-ID: CAP=n9p2i7HQzfs0XkP23RLEjuTLREVJU+12HxQURceJhSFTA6g@mail.gmail.com
Lists: pgsql-performance
Hi,
We are trying to find the maximum throughput, in terms of transactions per
second (or simultaneous read+write SQL operations per second), for a use
case that runs one ACID transaction (consisting of tens of reads and tens
of updates/inserts) per incoming element of a high-volume, high-velocity
data stream.
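For illustration, a pgbench custom script approximating this transaction
shape (the table and column names below are placeholders, not our actual
schema) would look roughly like this:

    \set id random(1, 1000000)
    BEGIN;
    -- a few of the "tens of reads" per stream element
    SELECT balance FROM accounts WHERE account_id = :id;
    SELECT status FROM orders WHERE account_id = :id LIMIT 1;
    -- a few of the "tens of updates/inserts"
    UPDATE accounts SET balance = balance + 1 WHERE account_id = :id;
    INSERT INTO events (account_id, payload) VALUES (:id, 'stream element');
    COMMIT;

Run with something like "pgbench -n -f txn.sql -c 64 -j 16 -T 300" to
approximate the concurrency of the stream consumers (client and thread
counts are guesses to be tuned, not measured values).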
Our load test showed that PostgreSQL 11/12 could sustain up to 10,000 to
11,000 such ACID transactions per second, which works out to roughly 55K
read SQL operations per second alongside 77K write SQL operations per
second (about 132K read+write SQL operations per second in total).
The underlying hardware limit is much higher. Is this the maximum
PostgreSQL can support? If not, which server tuning parameters should we
consider for this scale of throughput?
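As a sketch, the write-throughput settings commonly examined for this kind
of load look like the following (the values are illustrative placeholders
only, not recommendations for any specific hardware):

    -- Illustrative starting points; validate against your own hardware
    -- and durability requirements.
    ALTER SYSTEM SET shared_buffers = '16GB';             -- often ~25% of RAM; requires a restart
    ALTER SYSTEM SET max_wal_size = '32GB';               -- fewer, larger checkpoints under heavy writes
    ALTER SYSTEM SET checkpoint_completion_target = 0.9;  -- spread checkpoint I/O over the interval
    ALTER SYSTEM SET wal_buffers = '64MB';                -- requires a restart
    ALTER SYSTEM SET commit_delay = 1000;                 -- microseconds; encourages group commit under concurrency
    ALTER SYSTEM SET commit_siblings = 10;
    ALTER SYSTEM SET synchronous_commit = off;            -- only if losing the last few ms of commits on a crash is acceptable
    SELECT pg_reload_conf();                              -- applies the reloadable settings

Beyond single-instance settings, a connection pooler such as PgBouncer in
front of the database is usually part of the answer at this concurrency.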
Thanks,
Arti