From: Laurenz Albe <laurenz(dot)albe(at)cybertec(dot)at>
To: David Ventimiglia <davidaventimiglia(at)hasura(dot)io>, pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: Help with a good mental model for estimating PostgreSQL throughput
Date: 2023-10-30 15:46:26
Message-ID: c7e89fed5f1c544ba022b0332a492bdbe508e2fc.camel@cybertec.at
Lists: pgsql-general
On Mon, 2023-10-30 at 08:05 -0700, David Ventimiglia wrote:
> Can someone help me develop a good mental model for estimating PostgreSQL throughput?
> Here's what I mean. Suppose I have:
> * 1000 connections
> * typical query execution time of 1ms
> * but additional network latency of 100ms
> What, if anything, would be an estimate of the number of operations that can be performed
> within 1 second? My initial guess would be ~10000, but perhaps I'm overlooking
> something. I expect a more reliable figure would be obtained through testing, but
> I'm looking for an a priori back-of-the-envelope estimate. Thanks!
If the workload is CPU bound, it depends on the number of cores.
If the workload is disk bound, look for the number of I/O requests a typical query
needs, and how many of them you can perform per second.
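For instance (illustrative numbers, not from your mail): if a typical query needs
5 random reads and the storage sustains 20000 IOPS, the disk-bound ceiling is
about 20000 / 5 = 4000 queries per second.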
The network latency might well be the killer: at 100ms per round trip, a single
connection can complete at most about ten synchronous queries per second.
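To sketch that arithmetic (a back-of-the-envelope check using the numbers from
your mail, and assuming each connection runs queries synchronously, one at a time):

    # Assumed numbers from the question; synchronous clients,
    # one query in flight per connection.
    connections = 1000
    query_ms = 1        # server-side execution time per query
    latency_ms = 100    # network round-trip latency

    round_trip_ms = query_ms + latency_ms      # 101 ms per operation
    ops_per_conn = 1000 / round_trip_ms        # ~9.9 ops/s per connection
    total_ops = connections * ops_per_conn     # ~9900 ops/s overall

    # CPU sanity check: ~9900 queries/s at 1 ms of CPU each is
    # ~9.9 CPU-seconds per second, i.e. roughly 10 busy cores.
    cores_needed = total_ops * query_ms / 1000

    print(f"~{total_ops:.0f} ops/s, needing ~{cores_needed:.1f} cores")

So your ~10000 is the right order of magnitude as an upper bound, but only if the
server actually has the ten or so cores (and the I/O capacity) to keep up.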
Use PgBouncer with transaction mode pooling, so that your 1000 mostly-waiting
client connections share a small number of server backends instead of each
holding one.
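A minimal pgbouncer.ini sketch (database name, paths, and pool sizes are
placeholders, not settings from this thread):

    [databases]
    ; placeholder database name and host
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    ; return the server connection to the pool after each transaction
    pool_mode = transaction
    ; accept all 1000 client connections ...
    max_client_conn = 1000
    ; ... but keep only ~20 server backends busy
    default_pool_size = 20

A common starting point is to size default_pool_size near the number of cores.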
Yours,
Laurenz Albe