Re: Performance Woes

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Jeff Davis <pgsql(at)j-davis(dot)com>
Cc: "Joshua D(dot) Drake" <jd(at)commandprompt(dot)com>, CAJ CAJ <pguser(at)gmail(dot)com>, Ralph Mason <ralph(dot)mason(at)telogis(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Performance Woes
Date: 2007-05-10 04:30:04
Message-ID: 6685.1178771404@sss.pgh.pa.us
Lists: pgsql-performance

Jeff Davis <pgsql(at)j-davis(dot)com> writes:
> On Wed, 2007-05-09 at 17:29 -0700, Joshua D. Drake wrote:
>> Sounds to me like you just need to up the total amount of open files
>> allowed by the operating system.

> It looks more like the opposite, here's the docs for
> max_files_per_process:

I think Josh has got the right advice. The manual is just saying that
you can reduce max_files_per_process to avoid the failure, but it's not
making any promises about the performance penalty for doing that.
Apparently Ralph's app needs a working set of between 800 and 1000 open
files to have reasonable performance.
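
To be concrete, a minimal sketch of the knob in question (assuming the
stock default of 1000 and a Unix-style per-process descriptor limit;
none of this is from the original message, it just names the settings):

    # postgresql.conf -- upper bound on files each backend keeps open;
    # lowering it avoids "too many open files" errors at the cost of
    # closing and re-opening files more often
    max_files_per_process = 1000

    # per-process descriptor limit the postgres user actually gets
    ulimit -n

Since every backend can hold up to max_files_per_process descriptors,
the system-wide total to plan for is roughly max_files_per_process
times max_connections.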

> That is a lot of tables. Maybe a different OS will handle it better?
> Maybe there's some way that you can use fewer connections and then the
> OS could still handle it?

Also, it might be worth rethinking the database structure to reduce the
number of tables. But as a quick fix, increasing the kernel limit
seems like the easiest answer.
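
A rough sketch of what raising the limits might look like, assuming a
Linux kernel (the knobs differ on other platforms, and the numbers here
are placeholders, not recommendations):

    # system-wide ceiling on open file descriptors
    sysctl fs.file-max                    # check the current value
    sysctl -w fs.file-max=262144          # raise it for the running kernel
    echo "fs.file-max = 262144" >> /etc/sysctl.conf   # persist across reboots

    # per-process limit for the postgres user, e.g. in
    # /etc/security/limits.conf (only effective where pam_limits applies)
    postgres  soft  nofile  4096
    postgres  hard  nofile  4096

Whatever values are chosen should leave comfortable headroom over
max_files_per_process times max_connections, plus whatever else is
running on the box.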

regards, tom lane
