From: Francisco Olarte <folarte(at)peoplecall(dot)com>
To: Costa Alexoglou <costa(at)dbtune(dot)com>
Cc: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: Vacuum full connection exhaustion
Date: 2024-08-08 11:43:54
Message-ID: CA+bJJbwy5-HywbpWKgepJ+57f5465Bs7Khbsb+LYUbuo6=DSpg@mail.gmail.com
Lists: pgsql-general
On Thu, 8 Aug 2024 at 11:18, Costa Alexoglou <costa(at)dbtune(dot)com> wrote:
...
> So I am running Benchbase (a benchmark framework) with 50 terminals (50 concurrent connections).
> There are 2-3 additional connections, one for a postgres-exporter container for example.
...
> So far so good, and with a `max_connections` at 100 there is no problem. What happens is that if I execute manually `VACUUM FULL` the connections are exhausted.
> Also tried this with 150 `max_connections` to see if it just “doubles” the current connections, but as it turned out, it still exhausted all the connections until it reached `max_connections`.
> This was cross-checked, as the postgres-exporter could not connect, and I manually was not allowed to connect with `psql`.
Have you tried to check where the connections are coming from and what
they are doing? Apart from the max-parallel-worker settings Ron already
mentioned, in a scenario that combines a long-lived locking process
(VACUUM FULL) with a potentially aggressive connector (a benchmark
tool), I would verify that the benchmark tool is not timing out and
disconnecting improperly, leaving connections hung.
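One way to check that (a sketch; connect as a superuser or a role with pg_monitor so all backends are visible, and note that on very old versions some of these columns differ) is to query pg_stat_activity:

```sql
-- Group current connections by origin and state to see who holds them
SELECT usename, application_name, client_addr, state,
       wait_event_type, wait_event, count(*)
FROM pg_stat_activity
GROUP BY 1, 2, 3, 4, 5, 6
ORDER BY count(*) DESC;

-- Show how long each non-active backend has been in its state,
-- which exposes connections left hanging by a disconnected client
SELECT pid, state, now() - state_change AS in_state_for,
       left(query, 60) AS last_query
FROM pg_stat_activity
WHERE state <> 'active'
ORDER BY in_state_for DESC;
```

Lots of long-lived "idle" or "idle in transaction" rows from the benchmark's client address would point at improper disconnects rather than at VACUUM FULL itself.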
Francisco Olarte.
Next Message: Christophe Pettus | 2024-08-08 14:11:21 | Re: Vacuum full connection exhaustion
Previous Message: Ron Johnson | 2024-08-08 10:22:17 | Re: Vacuum full connection exhaustion