From: Claudio Freire <klaussfreire(at)gmail(dot)com>
To: Konstantin Knizhnik <k(dot)knizhnik(at)postgrespro(dot)ru>
Cc: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Built-in connection pooling
Date: 2018-01-19 17:03:16
Message-ID: CAGTBQpYf84L0m+7_HRdNOzgddagx4EDjn+3E0acPSJ5ZBmZNpw@mail.gmail.com
Lists: pgsql-hackers
On Fri, Jan 19, 2018 at 1:53 PM, Konstantin Knizhnik <
k(dot)knizhnik(at)postgrespro(dot)ru> wrote:
>
> On 19.01.2018 19:28, Pavel Stehule wrote:
>
>>>> When I've been thinking about adding a built-in connection pool, my
>>>> rough plan was mostly "bgworker doing something like pgbouncer" (that
>>>> is, listening on a separate port and proxying everything to regular
>>>> backends). Obviously, that has pros and cons, and probably would not
>>>> serve the threading use case well.
>>>
>>> And we will get the same problem as with pgbouncer: one process will
>>> not be able to handle all connections...
>>> Certainly it is possible to start several such scheduling bgworkers...
>>> But in any case it is more efficient to multiplex sessions in the
>>> backends themselves.
>>
>> pgbouncer holds the client connection the whole time. When we implement
>> listeners, all the work can be done by the worker processes, not by the
>> listeners.
>
> Sorry, I do not understand your point.
> In my case pgbench establishes its connections to pgbouncer only once, at
> the beginning of the test.
> And pgbouncer spends all its time in context switches (CPU usage is 100%,
> and it is mostly in kernel space: the top of the profile is all kernel
> functions).
> The picture will be the same if you do such scheduling in one bgworker
> instead of in pgbouncer.
> Modern systems are not able to perform more than several hundred thousand
> context switches per second.
> So with a single multiplexing thread or process you cannot get more than
> about 100k TPS, while on a powerful NUMA system it is possible to achieve
> millions of TPS.
> This is illustrated by the results I sent in the previous mail: by
> spawning 10 instances of pgbouncer I was able to get 7 times higher speed.
>
I'm sure pgbouncer can be improved. I've seen async code handle millions of
packets per second (zmq); pgbouncer shouldn't be radically different.
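
To make that concrete, below is a minimal, hypothetical sketch of the kind
of single-process async multiplexing I mean: one epoll loop (Linux-only)
that accepts clients and relays bytes between each client and its own
backend connection. It is not pgbouncer's actual code; the listen port
(6543), the backend address (127.0.0.1:5432), and the lack of error
handling, non-blocking writes and output buffering are simplifications for
illustration.

/*
 * Hypothetical sketch, not pgbouncer's code: a single-process epoll loop
 * that accepts clients on port 6543 and relays traffic to one backend
 * connection per client on 127.0.0.1:5432 (both addresses are made up for
 * the example).  Error handling, non-blocking writes and output buffering
 * are omitted for brevity.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_FDS 65536
static int peer[MAX_FDS];   /* peer[fd] = fd of the other side of the pair */

static int connect_backend(void)
{
    struct sockaddr_in addr = {0};
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    addr.sin_family = AF_INET;
    addr.sin_port = htons(5432);        /* assumed backend port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
    connect(fd, (struct sockaddr *) &addr, sizeof(addr));
    return fd;
}

int main(void)
{
    struct sockaddr_in addr = {0};
    struct epoll_event ev, events[64];
    int lsock = socket(AF_INET, SOCK_STREAM, 0);
    int ep = epoll_create1(0);

    addr.sin_family = AF_INET;
    addr.sin_port = htons(6543);        /* assumed listen port */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(lsock, (struct sockaddr *) &addr, sizeof(addr));
    listen(lsock, 128);

    ev.events = EPOLLIN;
    ev.data.fd = lsock;
    epoll_ctl(ep, EPOLL_CTL_ADD, lsock, &ev);

    for (;;)
    {
        int n = epoll_wait(ep, events, 64, -1);

        for (int i = 0; i < n; i++)
        {
            int fd = events[i].data.fd;

            if (fd == lsock)
            {
                /* New client: pair it with a fresh backend connection. */
                int client = accept(lsock, NULL, NULL);
                int backend = connect_backend();

                peer[client] = backend;
                peer[backend] = client;
                ev.events = EPOLLIN;
                ev.data.fd = client;
                epoll_ctl(ep, EPOLL_CTL_ADD, client, &ev);
                ev.data.fd = backend;
                epoll_ctl(ep, EPOLL_CTL_ADD, backend, &ev);
            }
            else
            {
                /* Relay whatever arrived to the other side of the pair. */
                char buf[8192];
                ssize_t len = read(fd, buf, sizeof(buf));

                if (len <= 0)
                {
                    /* EOF or error: drop both sides of the pair. */
                    epoll_ctl(ep, EPOLL_CTL_DEL, fd, NULL);
                    epoll_ctl(ep, EPOLL_CTL_DEL, peer[fd], NULL);
                    close(peer[fd]);
                    close(fd);
                }
                else
                    write(peer[fd], buf, len);  /* blocking write: sketch only */
            }
        }
    }
}

The per-connection state here is just an fd-to-fd mapping; a real pooler
would also track protocol and transaction boundaries so a backend can be
handed to another client between transactions, and would shard such loops
across several processes or threads to use more than one core.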