From: KONDO Mitsumasa <kondo(dot)mitsumasa(at)lab(dot)ntt(dot)co(dot)jp>
To: Andres Freund <andres(at)2ndquadrant(dot)com>
Cc: Amit Langote <amitlangote09(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>, Postgres General <pgsql-general(at)postgresql(dot)org>
Subject: Re: [GENERAL] Bottlenecks with large number of relation segment files
Date: 2013-08-06 10:09:06
Message-ID: 5200CB42.1090402@lab.ntt.co.jp
Lists: pgsql-general pgsql-hackers
(2013/08/05 19:28), Andres Freund wrote:
> On 2013-08-05 18:40:10 +0900, KONDO Mitsumasa wrote:
>> (2013/08/05 17:14), Amit Langote wrote:
>>> So, within the limits of max_files_per_process, the routines of file.c
>>> should not become a bottleneck?
>> It may not become a bottleneck.
>> One FD consumes 160 bytes on a 64-bit system; see the Linux manual page for "epoll".
>
> That limit is about max_user_watches, not the general cost of an
> fd. AFAIR they take up a good deal more than that.
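(As an aside, the per-user epoll watch limit mentioned above is exposed under /proc on Linux; a quick way to check it, assuming a reasonably recent kernel:)

```shell
# The epoll watch limit, per user, where the per-watch byte
# accounting from the epoll man page applies. The value is
# sized by the kernel from available low memory at boot.
cat /proc/sys/fs/epoll/max_user_watches
```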
Oh, my mistake... I went back and reread the section on FDs in the Linux manual page for "proc".
It says the FDs held by a process are visible in /proc/[pid]/fd/,
where each entry appears as a symbolic link and consumes 64 bytes of memory per FD.
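To illustrate, here is a quick sketch of inspecting a process's open FDs through /proc (assuming a Linux system; "self" is a shorthand the kernel resolves to the calling process's pid):

```shell
# Each open FD of the current process appears as a symbolic link
# under /proc/self/fd, pointing at the file, pipe, or socket it
# refers to.
ls -l /proc/self/fd

# Count how many FDs the process currently holds open.
echo "open FDs: $(ls /proc/self/fd | wc -l)"
```

The same listing works for any process you own by substituting its pid for "self", which makes it easy to see how many descriptors a backend has accumulated.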
Regards,
--
Mitsumasa KONDO
NTT Open Source Software Center