From: "Abraham, Danny" <danny_abraham(at)bmc(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: RE: Re: too many clients already
Date: 2020-04-02 17:13:58
Message-ID: 1b2dd9f85ea945dd80d3ceb194115612@phx-exmbprd-01.adprod.bmc.com
Lists: pgsql-general
Agree.
I suspect this is a misconfigured pgpool: the developer thinks the pool is reusing connections, while it is, in fact, reopening them.
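The distinction matters because reopening defeats the whole point of a pool. A minimal sketch of the idea (StubConnection is a hypothetical stand-in for a real PostgreSQL connection; a real pooler such as pgbouncer or pgpool applies the same principle):

```python
# Sketch of connection reuse vs. reopening with a stub connection.
# A correctly configured pool hands back an idle connection instead
# of constructing a new one for every short-lived check.

opened = 0

class StubConnection:
    def __init__(self):
        global opened
        opened += 1  # each construction simulates an expensive backend startup

class Pool:
    def __init__(self):
        self._idle = []

    def get(self):
        # Reuse an idle connection when one exists; open only as a last resort.
        return self._idle.pop() if self._idle else StubConnection()

    def put(self, conn):
        self._idle.append(conn)

pool = Pool()
for _ in range(1000):   # a stream of very short-lived checks
    conn = pool.get()
    pool.put(conn)      # returned to the pool, so the next check reuses it

print(opened)  # prints 1: one real connection served 1000 checks
```

A misconfigured setup behaves as if `put` discarded the connection, so `opened` would reach 1000 instead of 1.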
-----Original Message-----
From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Sent: Thursday, April 02, 2020 7:40 PM
To: Abraham, Danny <danny_abraham(at)bmc(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: [EXTERNAL] Re: too many clients already
"Abraham, Danny" <danny_abraham(at)bmc(dot)com> writes:
> Well, I guess the question is - how do I optimize PG for a stream of very short-lived checks...
You should be using a connection pooler for a load like that.
PG backends are fairly heavyweight things --- you don't want to fire one up for just a single query, at least not when there are many such queries per second.
I think pgbouncer and pgpool are the most widely used options, but this is a bit outside my expertise.
regards, tom lane
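For a workload of many short-lived checks, transaction-level pooling in pgbouncer is a common fit. A minimal illustrative sketch (database name, paths, and sizing values below are assumptions, not recommendations):

```ini
; pgbouncer.ini -- illustrative sketch only
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction      ; reuse server connections across short transactions
max_client_conn = 1000       ; many short-lived clients may connect
default_pool_size = 20       ; but only a few real PG backends are kept
```

With `pool_mode = transaction`, each backend is handed to a client only for the duration of a transaction, so a small pool of backends can absorb a large stream of brief queries.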