From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Ed L(dot)" <pgsql(at)bluepolka(dot)net>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: open file counts in 8.1.2?
Date: 2006-03-14 18:13:04
Message-ID: 21223.1142359984@sss.pgh.pa.us
Lists: pgsql-general
"Ed L." <pgsql(at)bluepolka(dot)net> writes:
> If we want to handle 16 clusters on this one box, each
> with 300 max_connections and 2000 relations, would it be
> ball-park reasonable to say that worst case we might have 300
> backends with ~2000 open file descriptors each (300 * 2000 =
> 600K open files per cluster, 600K * 16 clusters = 10M open
> files)?
No, an individual backend should never exceed max_files_per_process open
files (1000 by default). It will feel free to go up that high, though,
if it has reason to touch that many database files over its lifetime.
1000 is probably much higher than you really need for reasonable
performance; I'd be inclined to cut it to a couple hundred at most if
you need to sustain large numbers of backends. I dunno what sort of
penalties the kernel might have for millions of open files but there
probably are some ...
regards, tom lane
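To make the ceiling concrete, here is a minimal postgresql.conf sketch of the settings discussed above; the value of 200 simply follows the "couple hundred" suggestion and is illustrative, not a tuned recommendation:

    # postgresql.conf (one of the 16 clusters); values are hypothetical,
    # taken from the scenario in the question and the advice above
    max_connections = 300          # per-cluster connection limit from the question
    max_files_per_process = 200    # per-backend open-file ceiling; default is 1000

With those assumed numbers, the worst case across the box becomes 16 clusters * 300 backends * 200 descriptors = 960,000 open files, versus 4,800,000 at the default of 1000 and the ~10M estimated in the question.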