Re: Out of memory on SELECT in 8.3.5

From: "Matt Magoffin" <postgresql(dot)org(at)msqr(dot)us>
To: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "Stephen Frost" <sfrost(at)snowman(dot)net>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Out of memory on SELECT in 8.3.5
Date: 2009-02-09 08:23:08
Message-ID: 51736.192.168.1.106.1234167788.squirrel@msqr.us

> [root(at)170226-db7 ~]# su -l postgres -c "ulimit -a"
> core file size (blocks, -c) 0
> data seg size (kbytes, -d) unlimited
> max nice (-e) 0
> file size (blocks, -f) unlimited
> pending signals (-i) 139264
> max locked memory (kbytes, -l) 32
> max memory size (kbytes, -m) unlimited
> open files (-n) 1024
> pipe size (512 bytes, -p) 8
> POSIX message queues (bytes, -q) 819200
> max rt priority (-r) 0
> stack size (kbytes, -s) 10240
> cpu time (seconds, -t) unlimited
> max user processes (-u) 139264
> virtual memory (kbytes, -v) unlimited
> file locks (-x) unlimited

I just noticed something: the "open files" limit lists 1024, which is the
default for this system. A quick count of open data files currently in use
by Postgres returns almost 7000, though.

[root(at)170226-db7 ~]# lsof -u postgres | egrep '(/pg_data|/pg_index|/pg_log)' | wc -l
6749

We have 100+ postgres processes running. Since the 1024 open-file limit applies
per process, could it be contributing to this query's failure, or would I see an
explicit error message if an individual backend actually hit that limit?
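
In case it helps narrow that down, here is roughly how I'd check a single
backend rather than the whole postgres user (just a sketch; the PID below is an
example, not taken from our system):

# Example backend PID -- substitute a real one, e.g. from ps or pg_stat_activity
PID=12345

# Count file descriptors currently held by that one backend
ls /proc/$PID/fd | wc -l

# Newer kernels also expose the effective per-process limit here
# (this file may not exist on older 2.6 kernels)
grep "Max open files" /proc/$PID/limits 2>/dev/null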

Regards,
Matt
