From: Achilleas Mantzios <achill(at)matrix(dot)gatewaynet(dot)com>
To: pgsql-admin(at)postgresql(dot)org
Cc: Dilek Küçük <dilekkucuk(at)gmail(dot)com>
Subject: Re: max_files_per_process limit
Date: 2008-11-10 14:51:16
Message-ID: 200811101651.16538.achill@matrix.gatewaynet.com
Lists: pgsql-admin
On Monday 10 November 2008 16:18:37, Dilek Küçük wrote:
> Hi,
>
> We have a database of about 62000 tables (about 2000 tablespaces) with an
> index on each table. Postgresql version is 8.1.
>
So you have about 62000 distinct schemata in your db?
Imagine: the average enterprise has maybe 200 tables at most, and an
average-sized country has perhaps 300 such companies, public sector
included, so with 62000 tables you could blindly model .... the whole
activity of an entire country.
Is this some kind of replicated data?
What's the story?
Just curious.
> Although after the initial inserts to about 32000 tables the subsequent
> inserts are considerably fast, subsequent inserts to more than 32000 tables
> are very slow.
>
> This seems to be due to the datatype (integer) of the max_files_per_process
> option in the postgresql.conf file, which is used to set the maximum number
> of open file descriptors.
> Is there anything we could do about this max_files_per_process limit or any
> other way to speed up inserts to all these tables?
>
> Any suggestions are welcome.
>
> Kind regards,
> Dilek Küçük
>
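On the actual question: max_files_per_process caps how many files each
backend keeps open at once; past that limit PostgreSQL recycles
descriptors through an internal LRU pool, closing and reopening files,
which gets expensive once a workload touches tens of thousands of
relations (each table is at least one file, plus one per index). A
minimal sketch of what I would try first (the numbers below are
assumptions; size them to your machine and check that the kernel
allows them):

    # postgresql.conf (takes effect only on server restart)
    max_files_per_process = 4000    # default is 1000; assumed value

    # on the host, before starting postgres (Linux):
    sysctl -w fs.file-max=500000    # system-wide descriptor limit; assumed value
    ulimit -n 8192                  # per-process limit, in the postgres start script

Even so, with 62000 tables the descriptor churn may remain the real
cost, so consolidating tables (e.g. one table with a company key
instead of one table per company) is worth considering.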
--
Achilleas Mantzios