From: Peter Eisentraut <peter_e(at)gmx(dot)net>
To: Csaba Nagy <nagy(at)ecircle-ag(dot)com>
Cc: Michal Taborsky <michal(at)taborsky(dot)cz>, Postgres general mailing list <pgsql-general(at)postgresql(dot)org>
Subject: Re: Thousands of parallel connections
Date: 2004-08-16 14:37:59
Message-ID: 200408161637.59692.peter_e@gmx.net
Lists: pgsql-general
On Monday, 16 August 2004 16:20, Csaba Nagy wrote:
> Peter is definitely not a newbie on this list, so I'm sure he already
> thought about some kind of pooling if applicable... but then I'm
> dead-curious what kind of application could possibly rule out connection
> pooling even if it means so many open connections? Please shed some
> light, Peter...
There is already a connection pool in front of the real server, but the
connection pool doesn't help you if you in fact have 10000 concurrent
requests; it only saves connection startup effort. (You could make the
connection pool server queue the requests, but that is not the point of this
exercise.) I didn't quite consider the RAM question, but the machine is
almost big enough that it wouldn't matter. I'm thinking more in terms of the
practical limits of the server's internal structures or of the (Linux 2.6)
kernel.
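
[Editor's note, not part of the original mail: a minimal sketch of the distinction drawn above, i.e. a pool that merely reuses connections versus one that also queues callers so at most N backends run concurrently. It assumes psycopg2 is available; the DSN "dbname=test", the MAX_CONCURRENT value, and the run_query helper are purely illustrative.]

    # Sketch only: assumes psycopg2 and a reachable database at "dbname=test".
    import threading
    from psycopg2.pool import ThreadedConnectionPool

    MAX_CONCURRENT = 50                      # cap on simultaneous backend sessions
    pool = ThreadedConnectionPool(1, MAX_CONCURRENT, "dbname=test")
    slots = threading.BoundedSemaphore(MAX_CONCURRENT)

    def run_query(sql, params=None):
        """Queue the caller until a connection slot is free, then execute."""
        with slots:                          # blocks (queues) when all slots are busy
            conn = pool.getconn()
            try:
                with conn.cursor() as cur:
                    cur.execute(sql, params)
                    return cur.fetchall()
            finally:
                pool.putconn(conn)

Without the semaphore, the pool only spares you the cost of opening new connections; with it, excess requests wait in line instead of all hitting the server at once, which is exactly the queuing behavior the mail sets aside as out of scope.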