From: Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>
To: Israel Brewster <israel(at)ravnalaska(dot)net>, John R Pierce <pierce(at)hogranch(dot)com>
Cc: "pgsql-general(at)postgresql(dot)org general" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Determining server load
Date: 2016-09-27 18:48:14
Message-ID: 3f7a0d5a-f4f6-548b-1edf-d1d511dee4b3@aklaver.com
Lists: pgsql-general
On 09/27/2016 11:40 AM, Israel Brewster wrote:
> On Sep 27, 2016, at 9:55 AM, John R Pierce <pierce(at)hogranch(dot)com> wrote:
>>
>> On 9/27/2016 9:54 AM, Israel Brewster wrote:
>>>
>>> I did look at pgbadger, which tells me I have gotten as high as 62 connections/second, but given that most of those connections are probably very short lived that doesn't really tell me anything about concurrent connections.
>>
>> Each connection requires a process fork of the database server, which is very expensive. You might consider using a connection pool such as pgbouncer to maintain a fixed (but dynamically sized) number of real database connections, and have your apps connect/disconnect to this pool. Obviously, you need a pool for each database, and your apps need to be 'stateless' and not make or rely on any session changes to the connection so they don't interfere with each other. Doing this correctly can make a huge performance improvement on the sort of apps that do (connect, transaction, disconnect) a lot.
>
> Understood. My main *performance critical* apps all use an internal connection pool for this reason - Python's psycopg2 pool, to be exact. I still see a lot of connects/disconnects, but I *think* that's psycopg2 recycling connections in the background - I'm not 100% certain how the pools there work (and maybe they need some tweaking as well, e.g. setting them to re-use connections more times or something). The apps that don't use pools are typically data-gathering scripts where it doesn't matter how long it takes to connect/write the data (within reason).
http://initd.org/psycopg/docs/pool.html
"Note
This pool class is mostly designed to interact with Zope and probably
not useful in generic applications. "
Are you using Zope?
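For what it's worth, that note applies to PersistentConnectionPool; the same psycopg2.pool module also has SimpleConnectionPool and ThreadedConnectionPool, which are not Zope-specific. The pattern they implement is just "hand out an idle connection instead of forking a new backend." A minimal sketch of that pattern, using a stand-in connect() rather than a live database (with the real library you would use psycopg2.pool.ThreadedConnectionPool directly):

```python
# Minimal sketch of the connect/reuse pattern a pool provides.
# `connect` is a stand-in for psycopg2.connect; this is an
# illustration of the mechanism, not production pooling code.
import queue

class SimplePool:
    def __init__(self, connect, minconn, maxconn):
        self._connect = connect
        self._conns = queue.Queue(maxsize=maxconn)
        for _ in range(minconn):
            self._conns.put(connect())      # pre-open the minimum

    def getconn(self):
        try:
            return self._conns.get_nowait()  # reuse an idle connection
        except queue.Empty:
            return self._connect()           # none idle: open a fresh one

    def putconn(self, conn):
        try:
            self._conns.put_nowait(conn)     # keep it for the next caller
        except queue.Full:
            conn.close()                     # over maxconn: really disconnect
```

With psycopg2 itself the equivalent is roughly pool = ThreadedConnectionPool(2, 10, dsn); conn = pool.getconn(); ...; pool.putconn(conn). Connections handed back with putconn() are what keep the server from seeing a fork per transaction.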
>
> That said, it seems highly probable, if not a given, that there comes a point where the overhead of handling all those connections starts slowing things down, and not just for the new connection being made. How to figure out where that point is for my system, and how close to it I am at the moment, is a large part of what I am wondering.
>
> Note also that I did realize I was completely wrong about the initial issue - it turned out it was a network issue, not a postgresql one. Still, I think my specific questions still apply, if only in an academic sense now :-)
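As to how close you are at any given moment: pg_stat_activity has one row per backend, so sampling it over time will show you your actual concurrent-connection high-water mark. A sketch, assuming a DB-API connection such as one from psycopg2.connect (the FILTER form needs PostgreSQL 9.4+; on older servers use two queries or a CASE expression):

```python
# Count concurrent backends via pg_stat_activity.
# `conn` is assumed to be any DB-API connection to the server.
ACTIVE_QUERY = """
    SELECT count(*) FILTER (WHERE state = 'active') AS active,
           count(*) AS total
    FROM pg_stat_activity
"""

def connection_load(conn):
    """Return (active, total) backend counts for the server."""
    cur = conn.cursor()
    cur.execute(ACTIVE_QUERY)
    return cur.fetchone()
```

Comparing `total` against max_connections tells you the headroom; `active` (vs. 'idle' and 'idle in transaction') tells you how many of those backends are actually doing work at the instant you sample.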
>
> -----------------------------------------------
> Israel Brewster
> Systems Analyst II
> Ravn Alaska
> 5245 Airport Industrial Rd
> Fairbanks, AK 99709
> (907) 450-7293
> -----------------------------------------------
>
>
>> --
>> john r pierce, recycling bits in santa cruz
--
Adrian Klaver
adrian(dot)klaver(at)aklaver(dot)com