From: Tatsuo Ishii <t-ishii(at)sra(dot)co(dot)jp>
To: tgl(at)sss(dot)pgh(dot)pa(dot)us
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Re: Too many open files (was Re: spinlock problems reported earlier)
Date: 2000-12-24 02:42:45
Message-ID: 20001224114245Z.t-ishii@sra.co.jp
Lists: pgsql-hackers
> Department of Things that Fell Through the Cracks:
>
> Back in August we had concluded that it is a bad idea to trust
> "sysconf(_SC_OPEN_MAX)" as an indicator of how many files each backend
> can safely open. FreeBSD was reported to return 4136, and I have
> since noticed that LinuxPPC returns 1024. Both of those are
> unreasonably large fractions of the actual kernel file table size.
> A few dozen backends opening hundreds of files apiece will fill the
> kernel file table on most Unix platforms.
>
> I'm not sure why this didn't get dealt with, but I think it's a "must
> fix" kind of problem for 7.1. The dbadmin has *got* to be able to
> limit Postgres' appetite for open file descriptors.
>
> I propose we add a new configuration parameter, MAX_FILES_PER_PROCESS,
> with a default value of about 100. A new backend would set its
> max-files setting to the smaller of this parameter or
> sysconf(_SC_OPEN_MAX).
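
For illustration, a minimal sketch of the proposed computation; the
function layout here is invented, not the actual fd.c code:

    #include <stdio.h>
    #include <unistd.h>

    /* Sketch only: take the smaller of the proposed
     * MAX_FILES_PER_PROCESS parameter (default ~100) and
     * sysconf(_SC_OPEN_MAX), since the latter can wildly overstate
     * how many files a backend may safely open.
     */
    #define MAX_FILES_PER_PROCESS 100   /* proposed default */

    static int
    compute_max_files(void)
    {
        long    sys_limit = sysconf(_SC_OPEN_MAX);

        if (sys_limit > 0 && sys_limit < MAX_FILES_PER_PROCESS)
            return (int) sys_limit;
        return MAX_FILES_PER_PROCESS;
    }

    int
    main(void)
    {
        printf("per-backend file limit: %d\n", compute_max_files());
        return 0;
    }
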
Seems like a nice idea. We have heard lots of problem reports caused by
running out of the kernel file table.
However, it would be even nicer if it were configurable at run time (at
postmaster startup), like the -N option. Maybe MAX_FILES_PER_PROCESS
could then serve as the hard limit?
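
For what it's worth, a purely hypothetical sketch of such a startup
switch; the option letter "-F" and the variable name are made up, not
actual postmaster code:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Hypothetical: a postmaster-style switch ("-F", an invented
     * letter) that lowers the per-backend file limit at startup,
     * much as -N caps the number of backends.
     */
    static int max_files_per_process = 100;  /* compiled-in default */

    int
    main(int argc, char *argv[])
    {
        int     c;

        while ((c = getopt(argc, argv, "F:")) != -1)
        {
            if (c == 'F')
            {
                int     v = atoi(optarg);

                /* Treat the compiled-in default as the hard upper
                 * limit; the switch can only lower it.
                 */
                if (v > 0 && v < max_files_per_process)
                    max_files_per_process = v;
            }
        }
        printf("max_files_per_process = %d\n", max_files_per_process);
        return 0;
    }
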
--
Tatsuo Ishii