From: Andres Freund <andres(at)2ndquadrant(dot)com>
To: KONDO Mitsumasa <kondo(dot)mitsumasa(at)lab(dot)ntt(dot)co(dot)jp>
Cc: Amit Langote <amitlangote09(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>, Postgres General <pgsql-general(at)postgresql(dot)org>
Subject: Re: Bottlenecks with large number of relation segment files
Date: 2013-08-05 10:28:41
Message-ID: 20130805102841.GD542@alap2.anarazel.de
Lists: pgsql-general pgsql-hackers
On 2013-08-05 18:40:10 +0900, KONDO Mitsumasa wrote:
> (2013/08/05 17:14), Amit Langote wrote:
> >So, within the limits of max_files_per_process, the routines of file.c
> >should not become a bottleneck?
> It may not become bottleneck.
> 1 FD consumes 160 byte in 64bit system. See linux manual at "epoll".
That limit is about max_user_watches, not the general cost of an
fd. AFAIR they take up a good deal more than that. Also, there are global
limits on the number of filehandles that can be open simultaneously on a
system.
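
For illustration, a minimal sketch (not part of the original mail) that
prints the three limits being conflated here on Linux: the epoll watch
limit the 160-byte figure refers to, the system-wide filehandle cap, and
the per-process descriptor limit that max_files_per_process has to stay
under:

	/*
	 * Sketch only: print the Linux limits relevant to this discussion.
	 */
	#include <stdio.h>
	#include <sys/resource.h>

	static void
	print_proc_value(const char *path, const char *label)
	{
		FILE	   *f = fopen(path, "r");
		long long	value;

		if (f == NULL)
		{
			printf("%s: unavailable (%s not present?)\n", label, path);
			return;
		}
		if (fscanf(f, "%lld", &value) == 1)
			printf("%s: %lld\n", label, value);
		fclose(f);
	}

	int
	main(void)
	{
		struct rlimit rl;

		/* epoll accounting limit -- what the "160 byte" figure is about */
		print_proc_value("/proc/sys/fs/epoll/max_user_watches",
						 "fs.epoll.max_user_watches");

		/* system-wide cap on simultaneously open filehandles */
		print_proc_value("/proc/sys/fs/file-max", "fs.file-max");

		/* per-process descriptor limit (soft/hard) */
		if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
			printf("RLIMIT_NOFILE: soft=%llu hard=%llu\n",
				   (unsigned long long) rl.rlim_cur,
				   (unsigned long long) rl.rlim_max);

		return 0;
	}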
Greetings,
Andres Freund
--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services