From: "Mark Alliban" <MarkA(at)idnltd(dot)com>
To: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: <pgsql-general(at)postgresql(dot)org>
Subject: Re: Number of open files
Date: 2001-02-07 16:20:10
Message-ID: 00f301c09121$d8a25740$6401a8c0@teledome.com
Lists: pgsql-general
> "Mark Alliban" <MarkA(at)idnltd(dot)com> writes:
> > I have increased file-max to 16000. However after about 24 hours of
running,
> > pgsql crashed and errors in the log showed that the system had run out
of
> > memory. I do not have the exact error message, as I was in a hurry to
get
> > the system up and running again (it is a live production system). The
system
> > has 512MB memory and there were 47 ODBC sessions in progress, so I
cannot
> > believe that the system *really* ran out of memory.
>
> Oh, I could believe that, depending on what your ODBC clients were
> doing. 10 meg of working store per backend is not out of line for
> complex queries. Have you tried watching with 'top' to see what a
> typical backend process size actually is for your workload?
>
> Also, the amount of RAM isn't necessarily the limiting factor here;
> what you should have told us is how much swap space you have ...
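For reference, a quick way to check what a typical backend actually uses, as suggested above (a minimal sketch assuming a Linux box with procps; process names and fields may differ on other systems):

    # List each backend's virtual (VSZ) and resident (RSS) size, in KB:
    ps -o pid,vsz,rss,args -C postgres

    # Or take a single batch snapshot with top and filter for backends:
    top -b -n 1 | grep postgres

Note that RSS counts the shared buffer pool once per process, so summing RSS across many backends overstates the real total.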
530MB of swap. top reports that the backends use around 17-19MB each on
average. Are you saying, then, that if I have 500 concurrent queries, I
will need 8GB or more of swap space (500 x ~17MB is roughly 8.5GB)? Is
there any way to limit the amount of memory a backend can use, and if
there is, would it be a very bad idea to do it?
Thanks,
Mark.
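On the question of limiting per-backend memory, a rough sketch of two knobs (the values here are hypothetical; set a cap too low and large queries will fail with out-of-memory errors instead of swapping):

    # Cap the data segment of the postmaster and every backend it forks;
    # ulimit -d takes KB, so 32768 = ~32MB per process (hypothetical cap):
    ulimit -d 32768

    # Start the postmaster, handing each backend a smaller sort-memory
    # budget; the backend's -S option is in KB, so 2048 = ~2MB per
    # sort/hash step:
    postmaster -D /usr/local/pgsql/data -o "-S 2048" &

With limits like these, a runaway query dies with an "out of memory" error instead of dragging the whole machine into swap, which may or may not be the right trade-off on a production box.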