Benefit to increasing max_files_per_process?

From: Ron Johnson <ronljohnsonjr(at)gmail(dot)com>
To: Pgsql-admin <pgsql-admin(at)lists(dot)postgresql(dot)org>
Subject: Benefit to increasing max_files_per_process?
Date: 2024-12-13 16:35:55
Message-ID: CANzqJaCicbLFV9emsTQO4mJGdGVkCo7F5Un6oUacjqOSSEj7fA@mail.gmail.com
Lists: pgsql-admin

PG 14.13 on RHEL 8.10

max_files_per_process is the default 1000.

Currently, many connection processes are hovering around 990 open files.

Does PG purposefully close some unused files when getting near the max?
Would we see any benefit from increasing max_files_per_process, along with
running "ulimit -n 2500" (and then, of course, restarting PG)?

I don't see any "too many open files" errors in the log file, but am trying
to plan ahead.
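For planning purposes, one way to see how close each backend actually is to
the limit is to count entries under /proc/<pid>/fd. A minimal sketch (assumes
Linux with /proc mounted; the backend PIDs would come from pg_stat_activity
or ps, and the function name here is just illustrative):

```shell
#!/bin/sh
# Count open file descriptors for a given PID by listing /proc/<pid>/fd.
count_open_fds() {
    ls "/proc/$1/fd" 2>/dev/null | wc -l
}

# Example: check this shell's own FD count; for PostgreSQL, substitute a
# backend PID, e.g. one returned by:
#   SELECT pid FROM pg_stat_activity;
count_open_fds $$
```

Comparing that count against max_files_per_process (and against "ulimit -n"
for the postgres user) shows how much headroom each process really has.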

--
Death to <Redacted>, and butter sauce.
Don't boil me, I'm still alive.
<Redacted> lobster!
