From: Csaba Nagy <nagy(at)ecircle-ag(dot)com>
To: Michal Taborsky <michal(at)taborsky(dot)cz>
Cc: Peter Eisentraut <peter_e(at)gmx(dot)net>, Postgres general mailing list <pgsql-general(at)postgresql(dot)org>
Subject: Re: Thousands of parallel connections
Date: 2004-08-16 14:20:44
Message-ID: 1092666044.944.5.camel@coppola.ecircle.de
Lists: pgsql-general
Hi guys,
Peter is definitely not a newbie on this list, so I'm sure he has already
thought about some kind of pooling if applicable... but then I'm
dead-curious: what kind of application could possibly rule out connection
pooling, even at the cost of so many open connections? Please give us some
light, Peter...
Cheers,
Csaba.
On Mon, 2004-08-16 at 15:53, Michal Taborsky wrote:
> Peter Eisentraut wrote:
> > Is there any practical limit on the number of parallel connections that a
> > PostgreSQL server can service? We're in the process of setting up a system
> > that will require up to 10000 connections open in parallel. The query load
> > is not the problem, but we're wondering about the number of connections.
> > Does anyone have experience with these kinds of numbers?
>
> No experience, but a little thinking and elementary-school math tells
> me that you'd need a huge amount of RAM to support 10000 connections,
> since Postgres is multi-process. Our typical postgres process uses 5-40
> MB of memory, depending on activity. So even if it were just 5 MB,
> with 10k connections we are talking about 50 GB of RAM. If these
> connections are idle, it would be a plain waste of resources.
>
> I suggest you look into some sort of connection pooling.
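The pooling idea above can be sketched in a few lines: instead of each of the 10,000 clients holding its own backend open (10,000 x 5-40 MB), a small shared pool caps the number of live server connections. This is a minimal illustrative sketch, not a real driver API; `ConnectionPool` and `FakeConnection` are hypothetical names, and in practice the factory would be something like a `psycopg2.connect()` call or an external pooler such as PgBouncer.

```python
import queue

class ConnectionPool:
    """Minimal fixed-size connection pool: clients borrow from a small
    shared set of connections, so at most `size` backend processes
    exist on the PostgreSQL side regardless of client count."""

    def __init__(self, factory, size):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(factory())

    def acquire(self, timeout=None):
        # Blocks until a connection is free; this is what bounds
        # concurrent server-side connections.
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        # Return the connection for reuse instead of closing it.
        self._pool.put(conn)

# Stand-in for a real connection object, to keep the sketch
# self-contained and runnable without a database.
class FakeConnection:
    def execute(self, sql):
        return "ran: " + sql

pool = ConnectionPool(FakeConnection, size=20)
conn = pool.acquire()
result = conn.execute("SELECT 1")
pool.release(conn)
```

With a pool of 20 like this, the memory cost on the server stays around 20 x 5-40 MB even with thousands of clients, at the price of clients occasionally waiting for a free connection.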