From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: michael(at)synchronicity(dot)com
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Hitting the nfile limit
Date: 2003-07-04 18:02:21
Message-ID: 2455.1057341741@sss.pgh.pa.us
Lists: pgsql-hackers

Michael Brusser <michael(at)synchronicity(dot)com> writes:
> Apparently we managed to run out of the open file descriptors on the host
> machine.
This is pretty common if you set a large max_connections value while
not doing anything to raise the kernel nfile limit. Postgres will
follow what the kernel tells it is a safe number of open files per
process, but far too many kernels lie through their teeth about what
they can support :-(
You can reduce max_files_per_process in postgresql.conf to keep Postgres
from believing what the kernel says. I'd recommend making sure that
max_connections * max_files_per_process is comfortably less than the
kernel nfiles setting (don't forget the rest of the system wants to have
some files open too ;-))
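
For illustration only, a hypothetical sketch (the numbers are made up; substitute whatever limit your own kernel actually reports):

    # suppose the kernel allows roughly 65536 open files system-wide
    # postgresql.conf:
    max_connections = 100
    max_files_per_process = 500    # 100 * 500 = 50000, comfortably under
                                   # 65536, leaving headroom for everything
                                   # else running on the machine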
> I wonder how Postgres handles this situation.
> (Or power outage, or any hard system fault, at this point)
Theoretically we should be able to recover from this without loss of
committed data (assuming you were running with fsync on). Is your QA
person certain that the record in question had been written by a
successfully-committed transaction?
regards, tom lane