From: Costa Alexoglou <costa(at)dbtune(dot)com>
To: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Vacuum full connection exhaustion
Date: 2024-08-07 17:34:12
Message-ID: CAJ+5Ff6WtFCzamrRZqN3u3htvkGcmob7VaYWYBd+sRx6jKpHuA@mail.gmail.com
Lists: pgsql-general
Hey folks,
I noticed something weird, and I'm not sure whether this is expected behaviour in PostgreSQL.
I am running Benchbase (a benchmark framework) with 50 terminals (50 concurrent connections).
There are 2-3 additional connections on top of that, for example one for a postgres-exporter container.
So far so good: with `max_connections` at 100 there is no problem.
What happens is that if I manually execute `VACUUM FULL`, the connections are exhausted.
I also tried this with `max_connections` set to 150, to see if it just "doubles" the current connections, but as it turned out it still exhausted connections until it reached `max_connections`.
I cross-checked this: the postgres-exporter could not connect, and I was not allowed to connect manually with `psql` either.
Is this expected or is this a bug?
postgres-exporter logs:
```
sql: error: connection to server on socket
"/var/run/postgresql/.s.PGSQL.5432" failed: FATAL: sorry, too many clients
already
```
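For anyone trying to diagnose the same thing: a query along these lines (my own sketch, using the standard `pg_stat_activity` columns, not something from the exporter) should show whether sessions are piling up in a lock wait while the `VACUUM FULL` holds its ACCESS EXCLUSIVE lock on the table:

```sql
-- Sessions currently waiting on a heavyweight lock (e.g. behind VACUUM FULL)
SELECT pid,
       state,
       wait_event_type,
       wait_event,
       now() - query_start AS waiting_for,
       left(query, 60)    AS query
FROM pg_stat_activity
WHERE wait_event_type = 'Lock'
ORDER BY query_start;
```

If the benchmark terminals show up here with `wait_event = 'relation'`, they are blocked rather than gone, and any client-side reconnect/pooling logic opening extra connections on top of them would explain the exhaustion.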