From: Markus Wanner <markus(at)bluegap(dot)ch>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)commandprompt(dot)com>, Itagaki Takahiro <itagaki(dot)takahiro(at)gmail(dot)com>, PostgreSQL-development Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: bg worker: general purpose requirements
Date: 2010-09-20 15:03:25
Message-ID: 4C9777BD.9050305@bluegap.ch
Lists: pgsql-hackers
On 09/18/2010 05:43 AM, Tom Lane wrote:
> The part of that that would worry me is open files. PG backends don't
> have any compunction about holding open hundreds of files. Apiece.
> You can dial that down but it'll cost you performance-wise. Last
> I checked, most Unix kernels still had limited-size FD arrays.
Thank you very much, that's a helpful hint.
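(For anyone following along: the per-backend knob Tom refers to is, as far as I understand, max_files_per_process; the numbers below are only an illustration of dialing it down, not a recommendation:

    # postgresql.conf -- illustrative values only
    max_files_per_process = 64   # default is 1000; fewer cached FDs per backend,
                                 # at the cost of re-opening relation files more often

    # the surrounding OS limits matter too once you multiply by backend count:
    $ ulimit -n                   # per-process FD limit inherited by backends
    $ cat /proc/sys/fs/file-max   # system-wide FD limit on Linux
)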
I did some quick testing and managed to fork up to around 2000 backends,
at which point my (laptop) system became unresponsive. To be honest, that
really surprises me.
(I had to increase the SHM and SEM kernel limits to be able to start
Postgres with that many processes at all. Linux obviously doesn't like
that very much: on a second test I got a kernel panic.)
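For reference, the limits I mean are the usual SysV IPC sysctls; the values here are only a rough sketch of the kind of bump needed for that many backends, not the exact figures I used:

    # /etc/sysctl.conf (or sysctl -w ...) -- illustrative values only
    kernel.shmmax = 4294967296          # max size of one shared memory segment, bytes
    kernel.shmall = 1048576             # total shared memory, in pages
    kernel.sem = 250 512000 100 2048    # SEMMSL SEMMNS SEMOPM SEMMNI

    # and in postgresql.conf, something along the lines of:
    max_connections = 2100              # needs matching SHM/SEM resources at startup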
> And as you say, ProcArray manipulations aren't going to be terribly
> happy about large numbers of idle backends, either.
Very understandable, yes.
Regards
Markus Wanner