| From: | Grzegorz Jaśkiewicz <gryzman(at)gmail(dot)com> |
|---|---|
| To: | Matt Magoffin <postgresql(dot)org(at)msqr(dot)us> |
| Cc: | pgsql-general(at)postgresql(dot)org |
| Subject: | Re: Out of memory on SELECT in 8.3.5 |
| Date: | 2009-02-09 08:32:53 |
| Message-ID: | 2f4958ff0902090032i1f4ce95drb33157772cb61cfb@mail.gmail.com |
| Lists: | pgsql-general |
On Mon, Feb 9, 2009 at 8:23 AM, Matt Magoffin <postgresql(dot)org(at)msqr(dot)us> wrote:
> I just noticed something: the "open files" limit lists 1024, which is the
> default for this system. A quick count of open data files currently in use
> by Postgres returns almost 7000, though.
>
> [root(at)170226-db7 ~]# lsof -u postgres |egrep
> '(/pg_data|/pg_index|/pg_log)' |wc -l
> 6749
>
> We have 100+ postgres processes running, so for an individual process,
> could the 1024 file limit be doing anything to this query? Or would I see
> an explicit error message regarding this condition?
You would get a message like "Open files rlimit 1024 reached for uid xxxx" in
syslog (which you should check out anyhow).
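Keep in mind the nofile rlimit applies per process, not per user, so an lsof count across all 100+ backends can legitimately exceed 1024 without any single backend being near its limit. A quick way to check each backend individually on Linux (just a sketch; it assumes /proc is mounted and that the backends match pgrep's "postgres" pattern) would be:

```shell
# For every postgres backend, compare its open-fd count against its own
# soft "Max open files" limit from /proc/<pid>/limits.
for pid in $(pgrep -u postgres postgres); do
    limit=$(awk '/Max open files/ {print $4}' "/proc/$pid/limits")
    nfds=$(ls "/proc/$pid/fd" 2>/dev/null | wc -l)
    echo "pid=$pid open_fds=$nfds soft_limit=$limit"
done
```

If any line shows open_fds close to soft_limit, that backend is the one to worry about; otherwise the aggregate lsof number is a red herring.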
I wonder if it isn't just another one of those 'this only happens on a
64-bit machine' problems :)
--
GJ
| | From | Date | Subject |
|---|---|---|---|
| Next Message | Matt Magoffin | 2009-02-09 08:53:11 | Re: Out of memory on SELECT in 8.3.5 |
| Previous Message | Matt Magoffin | 2009-02-09 08:23:08 | Re: Out of memory on SELECT in 8.3.5 |