From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Dilek Küçük <dilekkucuk(at)gmail(dot)com>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: max_files_per_process limit
Date: 2008-11-10 15:24:24
Message-ID: 6567.1226330664@sss.pgh.pa.us
Lists: pgsql-admin
"=?ISO-8859-1?Q?Dilek_K=FC=E7=FCk?=" <dilekkucuk(at)gmail(dot)com> writes:
> We have a database of about 62000 tables (about 2000 tablespaces) with an
> index on each table. Postgresql version is 8.1.
You should probably rethink that schema. A lot of similar tables can be
folded into one table with an extra key column. Also, where did you get
the idea that 2000 tablespaces would be a good thing? There's really no
point in more than one per spindle or filesystem.
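
A minimal sketch of that consolidation, with hypothetical table and column
names (the original post does not describe the actual schema):

    -- One table with an extra key column replaces ~62000 per-entity tables.
    CREATE TABLE readings (
        entity_id   integer     NOT NULL,  -- which of the former tables a row belonged to
        recorded_at timestamptz NOT NULL,
        payload     text
    );

    -- A single composite index replaces the ~62000 per-table indexes.
    CREATE INDEX readings_entity_idx ON readings (entity_id, recorded_at);

    -- Queries that used to target table_<n> now just filter on the key:
    -- SELECT * FROM readings WHERE entity_id = 1234;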
> Although, after the initial inserts into about 32000 tables, subsequent inserts
> are considerably fast, subsequent inserts to more than 32000 tables are very
> slow.
This has probably got more to do with inefficiencies of your filesystem
than anything else --- did you pick one that scales well to lots of
files per directory?
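
One way (a sketch, not from this thread) to see how many on-disk relation
files each tablespace directory has to cope with is to count relations per
tablespace in the catalogs:

    -- Each table, index, and TOAST relation is at least one file on disk;
    -- reltablespace = 0 means the database's default tablespace.
    SELECT COALESCE(t.spcname, '(database default)') AS tablespace,
           count(*)                                  AS relations
    FROM pg_class c
    LEFT JOIN pg_tablespace t ON t.oid = c.reltablespace
    GROUP BY 1
    ORDER BY relations DESC;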
> This seems to be due to the datatype (integer) of max_files_per_process
> option in the postgresql.conf file which is used to set the maximum number of
> open file descriptors.
It's not so much the datatype of max_files_per_process as the datatype
of kernel file descriptors that's the limitation ...
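
For reference, the parameter itself is an ordinary integer setting and can be
inspected from SQL (a sketch, not part of the original reply); the effective
ceiling comes from the kernel's per-process open-file limit (ulimit -n), not
from the parameter's datatype:

    -- Show the configured per-backend limit on simultaneously open files.
    SELECT name, setting, context
    FROM pg_settings
    WHERE name = 'max_files_per_process';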
regards, tom lane