Hitting the nfile limit

From: Michael Brusser <michael(at)synchronicity(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Hitting the nfile limit
Date: 2003-07-04 17:40:23
Message-ID: DEEIJKLFNJGBEMBLBAHCMEKHDFAA.michael@synchronicity.com
Lists: pgsql-hackers

We ran into a problem while load-testing a 7.3.2 server.
From the database log:

FATAL: cannot open /home/<some_path>/postgresql/PG_VERSION:
File table overflow

The QA engineer who ran the test claims that after the server was
restarted, one record in the database was missing.

We are not sure exactly what happened. He was running about 10 servers
on HP-UX 11, hitting them with AstraLoad. Most requests would try to
update some record in the database, most of them running at the
Serializable isolation level. Apparently we managed to exhaust the open
file descriptors on the host machine.
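For what it's worth, "File table overflow" is the classic ENFILE message,
meaning the system-wide open-file table filled up, not just a per-process
limit. A minimal sketch for inspecting the relevant limits (the `kmtune`
query for the HP-UX kernel tunable `nfile` is shown only as a comment, as
it is HP-UX-specific and assumed from memory):

```shell
#!/bin/sh
# Per-process soft limit on open file descriptors:
ulimit -n

# On HP-UX 11, the system-wide limit is the kernel tunable `nfile`;
# it can reportedly be queried with:
#   kmtune -q nfile
# Each PostgreSQL backend will try to use up to max_files_per_process
# descriptors, so with N backends the system needs roughly
# N * max_files_per_process entries available in the file table.
```

With ~10 servers each spawning many backends, the aggregate demand can
easily exceed a default `nfile` setting.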

I wonder how Postgres handles this situation.
(Or a power outage, or any hard system fault, for that matter.)

Is it possible that we really lost a record because of this?
Should we consider changing the default WAL_SYNC_METHOD?
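For context, these are the 7.3-era postgresql.conf settings that bear on
the questions above (a sketch showing the usual defaults, not a
recommendation; exact defaults vary by platform):

```
# postgresql.conf (7.3) -- sketch only
fsync = true                  # disabling this risks losing committed data on a crash
wal_sync_method = fsync       # alternatives: fdatasync, open_sync, open_datasync
max_files_per_process = 1000  # lowering this reduces pressure on the system file table
```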

Thanks in advance,
Michael.
