From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Darin Fisher <darinf(at)pfm(dot)net>
Cc: pgsql-bugs(at)postgresql(dot)org
Subject: Re: Too many open files
Date: 2001-08-01 17:52:50
Message-ID: 10499.996688370@sss.pgh.pa.us
Lists: pgsql-bugs

Darin Fisher <darinf(at)pfm(dot)net> writes:
> I am running PostgreSQL 7.1 on Red Hat 6.2, kernel 2.4.6.
> Under a pretty heavy load:
> 1000 transactions per second
> 32 open connections
> Everything restarts because of too many open files.
> I have increased my max number of open files to 16384, but this
> just delays the inevitable.
> I have tested the same scenario under Solaris 8 and it works
> fine.
Linux and the BSDs have a tendency to promise more than they can
deliver about how many files an individual process can open.  Look at
pg_nofile() in src/backend/storage/file/fd.c: it believes whatever
sysconf(_SC_OPEN_MAX) tells it, and on these OSes the answer is likely
to be several thousand.  The OS can indeed support that many open files
when *one* backend does it, but not when dozens of backends do it at
once.
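For illustration, here is a minimal standalone sketch (not PostgreSQL
source) that prints what the OS advertises as the per-process limit.
On a stock Linux or BSD box both numbers typically come back in the
thousands:

    /* print the advertised per-process open-file limits */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/resource.h>

    int
    main(void)
    {
        struct rlimit rl;

        /* this is the value pg_nofile() trusts */
        printf("sysconf(_SC_OPEN_MAX) = %ld\n",
               sysconf(_SC_OPEN_MAX));

        /* the same limit as reported by getrlimit() */
        if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
            printf("RLIMIT_NOFILE: soft %ld, hard %ld\n",
                   (long) rl.rlim_cur, (long) rl.rlim_max);
        return 0;
    }

With 32 backends each believing it may open several thousand files,
even a system-wide table raised to 16384 fills up eventually.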
I have previously suggested that we should have a configurable upper
limit for the number-of-openable-files that we will believe --- probably
a GUC variable with a default value of, say, a couple hundred. No one's
gotten around to doing it, but if you'd care to submit a patch...
As a quick hack, you could just insert a hardcoded limit in
pg_nofile().
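Something along these lines would do.  This is only a sketch: the real
pg_nofile() in fd.c differs in detail, FILES_WE_BELIEVE is a made-up
name, and 100 is an arbitrary conservative cap.  It assumes <unistd.h>
is included, as fd.c already does:

    #define FILES_WE_BELIEVE 100    /* arbitrary hardcoded cap */

    static long
    pg_nofile(void)
    {
        static long no_files = 0;

        if (no_files == 0)
        {
            no_files = sysconf(_SC_OPEN_MAX);
            if (no_files == -1)
                no_files = FILES_WE_BELIEVE;    /* sysconf failed */

            /* the quick hack: don't believe more than the cap */
            if (no_files > FILES_WE_BELIEVE)
                no_files = FILES_WE_BELIEVE;
        }
        return no_files;
    }

A GUC variable would make the cap tunable per installation instead of
a compile-time constant, which is why that is the preferable fix.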
regards, tom lane