From: KONDO Mitsumasa <kondo(dot)mitsumasa(at)lab(dot)ntt(dot)co(dot)jp>
To: Amit Langote <amitlangote09(at)gmail(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>, Postgres General <pgsql-general(at)postgresql(dot)org>
Subject: Re: [GENERAL] Bottlenecks with large number of relation segment files
Date: 2013-08-05 08:01:10
Message-ID: 51FF5BC6.5000007@lab.ntt.co.jp
Lists: pgsql-general pgsql-hackers
Hi Amit,
(2013/08/05 15:23), Amit Langote wrote:
> May the routines in fd.c become bottleneck with a large number of
> concurrent connections to above database, say something like "pgbench
> -j 8 -c 128"? Is there any other place I should be paying attention
> to?
What kind of file system are you using?
When opening a file, the ext3 or ext4 file system seems to search the
directory sequentially to find the file's inode.
Also, PostgreSQL limits each process to 1000 file descriptors, which seems too
small. You can change this by editing "max_files_per_process = 1000;" in
src/backend/storage/file/fd.c; rewriting that value changes the per-process FD
limit (see the sketch below). I have already created a patch that makes this
configurable in postgresql.conf, and will submit it in the next CommitFest.
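
For reference, a minimal sketch of the definition in question, assuming a
9.3-era fd.c (the comment here is paraphrased, not verbatim from the source):

    /* src/backend/storage/file/fd.c -- paraphrased sketch, not verbatim */

    /*
     * Default cap on the number of file descriptors one backend keeps open
     * through the virtual file descriptor (VFD) layer.  Raising it lets a
     * backend cache more open segment files, reducing repeated open()/close()
     * calls, at the cost of higher OS-level FD consumption.
     */
    int         max_files_per_process = 1000;

If your build already exposes this as a GUC, it can be raised without
recompiling, e.g. in postgresql.conf (takes effect at server start, and the
value 4000 below is only illustrative; it is still bounded by the OS limit
shown by "ulimit -n"):

    max_files_per_process = 4000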
Regards,
--
Mitsumasa KONDO
NTT Open Source Software Center