From: Ryan Lambert <ryan(at)rustprooflabs(dot)com>
To: Konstantin Knizhnik <k(dot)knizhnik(at)postgrespro(dot)ru>
Cc: DEV_OPS <devops(at)ww-it(dot)cn>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Built-in connection pooler
Date: 2019-08-07 12:52:26
Message-ID: CAN-V+g_3=8PgYJdjGPFYgeuVqc7JXYibTMY8RM0N7Ka5Bk5AHA@mail.gmail.com
Lists: pgsql-hackers
> First of all, the default value of this parameter is 1000, not 10.
Oops, my bad! Sorry about that; I'm not sure how I got that in my head
last night, but I see now how that would make it act strangely. I'll adjust
my notes before re-testing. :)
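
Next time I'll double-check the live values first with something like this
(assuming the patch exposes these settings as ordinary GUCs, which the
"configuration parameter" hint in the warning suggests):

    SHOW max_sessions;
    SELECT name, setting, boot_val
      FROM pg_settings
     WHERE name IN ('max_sessions', 'connection_proxies');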
Thanks,
*Ryan Lambert*
On Wed, Aug 7, 2019 at 4:57 AM Konstantin Knizhnik <
k(dot)knizhnik(at)postgrespro(dot)ru> wrote:
> Hi Ryan,
>
> On 07.08.2019 6:18, Ryan Lambert wrote:
> > Hi Konstantin,
> >
> > I did some testing with the latest patch [1] on a small local VM with
> > 1 CPU and 2GB RAM with the intention of exploring pg_pooler_state().
> >
> > Configuration:
> >
> > idle_pool_worker_timeout = 0 (default)
> > connection_proxies = 2
> > max_sessions = 10 (default)
> > max_connections = 1000
> >
> > Initialized pgbench w/ scale 10 for the small server.
> >
> > Running pgbench w/out connection pooler with 300 connections:
> >
> > pgbench -p 5432 -c 300 -j 1 -T 60 -P 15 -S bench_test
> > starting vacuum...end.
> > progress: 15.0 s, 1343.3 tps, lat 123.097 ms stddev 380.780
> > progress: 30.0 s, 1086.7 tps, lat 155.586 ms stddev 376.963
> > progress: 45.1 s, 1103.8 tps, lat 156.644 ms stddev 347.058
> > progress: 60.6 s, 652.6 tps, lat 271.060 ms stddev 575.295
> > transaction type: <builtin: select only>
> > scaling factor: 10
> > query mode: simple
> > number of clients: 300
> > number of threads: 1
> > duration: 60 s
> > number of transactions actually processed: 63387
> > latency average = 171.079 ms
> > latency stddev = 439.735 ms
> > tps = 1000.918781 (including connections establishing)
> > tps = 1000.993926 (excluding connections establishing)
> >
> >
> > It crashes when I attempt to run with the connection pooler and 300
> > connections:
> >
> > pgbench -p 6543 -c 300 -j 1 -T 60 -P 15 -S bench_test
> > starting vacuum...end.
> > connection to database "bench_test" failed:
> > server closed the connection unexpectedly
> > This probably means the server terminated abnormally
> > before or while processing the request.
> >
> > In the logs I get:
> >
> > WARNING: PROXY: Failed to add new client - too much sessions: 18
> > clients, 1 backends. Try to increase 'max_sessions' configuration
> > parameter.
> >
> > The logs report 1 backend even though max_sessions is the default of
> > 10. Why is there only 1 backend reported? Is that error message
> > getting the right value?
> >
> > Minor grammar fix: this warning should say "too many sessions"
> > instead of "too much sessions."
> >
> > Reducing pgbench to only 30 connections keeps it from completely
> > crashing but it still does not run successfully.
> >
> > pgbench -p 6543 -c 30 -j 1 -T 60 -P 15 -S bench_test
> > starting vacuum...end.
> > client 9 aborted in command 1 (SQL) of script 0; perhaps the backend
> > died while processing
> > client 11 aborted in command 1 (SQL) of script 0; perhaps the backend
> > died while processing
> > client 13 aborted in command 1 (SQL) of script 0; perhaps the backend
> > died while processing
> > ...
> > ...
> > progress: 15.0 s, 5734.5 tps, lat 1.191 ms stddev 10.041
> > progress: 30.0 s, 7789.6 tps, lat 0.830 ms stddev 6.251
> > progress: 45.0 s, 8211.3 tps, lat 0.810 ms stddev 5.970
> > progress: 60.0 s, 8466.5 tps, lat 0.789 ms stddev 6.151
> > transaction type: <builtin: select only>
> > scaling factor: 10
> > query mode: simple
> > number of clients: 30
> > number of threads: 1
> > duration: 60 s
> > number of transactions actually processed: 453042
> > latency average = 0.884 ms
> > latency stddev = 7.182 ms
> > tps = 7549.373416 (including connections establishing)
> > tps = 7549.402629 (excluding connections establishing)
> > Run was aborted; the above results are incomplete.
> >
> > Logs for that run show (truncated):
> >
> >
> > 2019-08-07 00:19:37.707 UTC [22152] WARNING: PROXY: Failed to add new
> > client - too much sessions: 18 clients, 1 backends. Try to increase
> > 'max_sessions' configuration parameter.
> > 2019-08-07 00:31:10.467 UTC [22151] WARNING: PROXY: Failed to add new
> > client - too much sessions: 15 clients, 4 backends. Try to increase
> > 'max_sessions' configuration parameter.
> > 2019-08-07 00:31:10.468 UTC [22152] WARNING: PROXY: Failed to add new
> > client - too much sessions: 15 clients, 4 backends. Try to increase
> > 'max_sessions' configuration parameter.
> > ...
> > ...
> >
> >
> > Here it is reporting fewer clients with more backends. Still, only 4
> > backends reported with 15 clients doesn't seem right. Looking at the
> > results from pg_pooler_state() at the same time (below) showed 5 and 7
> > backends for the two different proxies, so why are the logs only
> > reporting 4 backends when pg_pooler_state() reports 12 total?
> >
> > Why is n_idle_clients negative? In this case it showed -21 and -17.
> > Each proxy reported 7 clients; with max_sessions = 10, those
> > n_idle_clients results don't make sense to me.
> >
> >
> > postgres=# SELECT * FROM pg_pooler_state();
> >   pid  | n_clients | n_ssl_clients | n_pools | n_backends | n_dedicated_backends | n_idle_backends | n_idle_clients | tx_bytes | rx_bytes | n_transactions
> > -------+-----------+---------------+---------+------------+----------------------+-----------------+----------------+----------+----------+----------------
> >  25737 |         7 |             0 |       1 |          5 |                    0 |               0 |            -21 |  4099541 |  3896792 |          61959
> >  25738 |         7 |             0 |       1 |          7 |                    0 |               2 |            -17 |  4530587 |  4307474 |          68490
> > (2 rows)
> >
> >
> > I get errors running pgbench down to only 20 connections with this
> > configuration. I tried adjusting connection_proxies = 1 and it handles
> > even fewer connections. Setting connection_proxies = 4 allows it to
> > handle 20 connections without error, but by 40 connections it starts
> > having issues.
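> >
> > If the usable capacity works out to roughly connection_proxies *
> > max_sessions (just a guess on my part from these runs), that would
> > line up: 2 * 10 = 20 and 4 * 10 = 40, which is about where the errors
> > start in each configuration.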
> >
> > While I don't have expectations of this working great (or even decent)
> > on a tiny server, I don't expect it to crash in a case where the
> > standard connections work. Also, the logs and the function both show
> > that the total backends is less than the total available and the two
> > don't seem to agree on the details.
> >
> > I think it would help to have details about the pg_pooler_state
> > function added to the docs, maybe in this section [2]?
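> >
> > For example, a rollup like this (just a sketch using the column names
> > from the pg_pooler_state() output above) might make a good doc example:
> >
> > SELECT sum(n_clients)      AS clients,
> >        sum(n_backends)     AS backends,
> >        sum(n_idle_clients) AS idle_clients,
> >        sum(n_transactions) AS transactions
> > FROM pg_pooler_state();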
> >
> > I'll take some time later this week to examine pg_pooler_state further
> > on a more appropriately sized server.
> >
> > Thanks,
> >
> >
> > [1]
> >
> https://www.postgresql.org/message-id/attachment/103046/builtin_connection_proxy-16.patch
> > [2] https://www.postgresql.org/docs/current/functions-info.html
> >
> > Ryan Lambert
> >
>
> Sorry, it looks like there is a misunderstanding about the meaning of the
> max_sessions parameter.
> First of all, the default value of this parameter is 1000, not 10.
> It looks like you explicitly specified the value 10, and that is what
> caused these problems.
>
> So "max_sessions" parameter specifies how much sessions can be handled
> by one backend.
> Certainly it makes sense only if pooler is switched on (number of
> proxies is not zero).
> If pooler is switched off, than backend is handling exactly one session/
>
> There is not much sense in limiting the number of sessions served by one
> backend, because the main goal of a connection pooler is to handle an
> arbitrary number of client connections with a limited number of backends.
> The only reason this parameter exists is that a WaitEventSet requires its
> maximal number of events to be specified up front.
> The proxy needs to multiplex connected backends and clients, so it
> creates a WaitEventSet of size max_sessions*2 (multiplied by two because
> it has to listen both for clients and for backends).
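> For example, with your setting of max_sessions = 10, each proxy creates
> a WaitEventSet of only 10 * 2 = 20 events for all of its clients and
> backends together; with the default of 1000 it would be 2000.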
>
> So the value of this parameter should be large enough. The default value
> is 1000, but there should be no problem setting it to 10000 or even
> 1000000 (hoping that the OS will support it).
>
> But the observed behavior ("server closed the connection unexpectedly"
> and a negative number of idle clients) is certainly not correct.
> I have attached to this mail a patch which fixes both problems: it
> correctly reports the error to the client and correctly calculates the
> number of idle clients.
> The new version is also available in my GIT repository:
> https://github.com/postgrespro/postgresql.builtin_pool.git
> branch conn_proxy.
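>
> To try it:
>
> git clone -b conn_proxy https://github.com/postgrespro/postgresql.builtin_pool.git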
>
>
>
>
> --
> Konstantin Knizhnik
> Postgres Professional: http://www.postgrespro.com
> The Russian Postgres Company
>
>