From: "Mark Alliban" <MarkA(at)idnltd(dot)com>
To: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: <pgsql-general(at)postgresql(dot)org>
Subject: Re: Number of open files
Date: 2001-02-07 13:31:54
Message-ID: 007001c0910a$56ed32e0$6401a8c0@teledome.com
Lists: pgsql-general
> "Mark Alliban" <MarkA(at)idnltd(dot)com> writes:
> > I am having problems with the number of open files on Redhat 6.1. The
value
> > of /proc/sys/fs/file-max is 4096 (the default), but this value is
reached
> > with about 50 ODBC connections. Increasing the file-max value would only
> > temporarily improve matters because on the long-term I expect to have
500+
> > active connections. How comes there are so many open files per
connection?
> > Is there any way to decrease the number of open files, so that I don't
have
> > to increase file-max to immense proportions?
>
> You can hack the routine pg_nofile() in src/backend/storage/file/fd.c
> to return some smaller number than it's returning now, but I really
> wouldn't advise reducing it below thirty or so. You'll still need to
> increase file-max.
>
> regards, tom lane
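
In case it is useful, one rough way to see how many descriptors each backend
is actually holding (assuming you know a backend PID; 12345 below is just a
placeholder) seems to be:

    # count descriptors held by a single backend
    # (12345 is a placeholder for a real backend PID)
    ls /proc/12345/fd | wc -l

    # system-wide file handle usage vs. the ceiling
    # (allocated, free, maximum)
    cat /proc/sys/fs/file-nr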
I have increased file-max to 16000. However, after about 24 hours of running,
pgsql crashed, and errors in the log showed that the system had run out of
memory. I do not have the exact error message, as I was in a hurry to get
the system up and running again (it is a live production system). The system
has 512MB of memory and there were 47 ODBC sessions in progress, so I cannot
believe that the system *really* ran out of memory. I start postmaster with
-B 2048 -N 500, if that is relevant; as I understand it, -B 2048 is only
about 16MB of shared buffers (at 8kB per buffer).
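
For reference, this is roughly how I raised the limit (the /proc paths are as
on this Red Hat 6.1 box; the change does not survive a reboot, so it also has
to go into an init script):

    # check the current ceiling and usage first
    cat /proc/sys/fs/file-max
    cat /proc/sys/fs/file-nr

    # raise the system-wide ceiling to 16000
    echo 16000 > /proc/sys/fs/file-max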
Also backends seem to hang around for about a minute after I close the ODBC
connections. Is this normal?
Thanks,
Mark.