From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Christopher Kings-Lynne <chriskl(at)familyhealth(dot)com(dot)au>
Cc: Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: PostgreSQL running out of file handles
Date: 2005-05-13 03:20:35
Message-ID: 19492.1115954435@sss.pgh.pa.us
Lists: pgsql-hackers

Christopher Kings-Lynne <chriskl(at)familyhealth(dot)com(dot)au> writes:
> A few days back the load increased on our database server to the point
> where it could not get enough file handles. This causes the backends to
> crash, get restarted only to crash again, on and on.
> We fixed it by bumping kern.maxfiles, but was just wondering if this is
> a scenario that PostgreSQL should handle more gracefully?
I suppose you are running on some BSD variant? BSD is notorious for
promising more than it can deliver with respect to the number of open
files per process. This is a kernel bug, not a Postgres bug.
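(For the archives, a rough sketch of the kernel-side fix on FreeBSD, which
is what "bumping kern.maxfiles" amounts to; the numbers are illustrative,
not recommendations, and should be sized to your workload:

    # raise the system-wide and per-process open-file limits
    sysctl kern.maxfiles=65536
    sysctl kern.maxfilesperproc=32768

    # persist the settings across reboots
    echo 'kern.maxfiles=65536' >> /etc/sysctl.conf
    echo 'kern.maxfilesperproc=32768' >> /etc/sysctl.conf
)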
You can adjust Postgres' max_files_per_process setting to compensate for
the kernel's lying about its capabilities.
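(As a sketch of the Postgres-side knob: the shipped default is 1000, and
the useful direction here is downward, to something the kernel can still
honor after you multiply by max_connections:

    # postgresql.conf
    # cap the number of files each backend will try to keep open at once
    max_files_per_process = 200

This setting takes effect only on a server restart.)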
(Postgres is in fact one of the most robust applications I know of
in terms of not going belly-up in response to EMFILE or ENFILE.
However, if there are not any spare descriptors to close, there's
not a lot we can do except fail.)
regards, tom lane