From: Steve Crawford <scrawford(at)pinpointresearch(dot)com>
To: Mark Harrison <mh(at)pixar(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: maximum number of client connections?
Date: 2003-10-16 21:51:15
Message-ID: 200310161451.15631.scrawford@pinpointresearch.com
Lists: pgsql-general

On Thursday 16 October 2003 11:47 am, Mark Harrison wrote:
> We have the situation where it would be convenient if we
> could support a large number (>1024, possibly in the 2000-3000
> range) of client connections.
>
> What are our options for this?

I suspect your best bet is some sort of connection pooling. Even if
you got PostgreSQL to compile with support for that many clients,
each client connection requires a separate backend process, so you
would have to make sure your OS and hardware wouldn't choke running
that many processes.
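The idea behind pooling is simple: many logical clients take turns on a small, fixed set of "real" database connections, so the server only ever sees a handful of backends. A minimal sketch of that pattern, using Python's standard library with sqlite3 standing in for PostgreSQL so it is self-contained (a real deployment would pool actual PostgreSQL connections via one of the tools below):

```python
# Minimal connection-pool sketch: N clients share POOL_SIZE
# pre-opened connections instead of opening one each.
# sqlite3 is a stand-in here so the example runs anywhere.
import queue
import sqlite3

POOL_SIZE = 3  # a handful of "real" connections

# Pre-open the pooled connections.
pool = queue.Queue(maxsize=POOL_SIZE)
for _ in range(POOL_SIZE):
    pool.put(sqlite3.connect(":memory:", check_same_thread=False))

def run_query(sql):
    """Borrow a connection, run the query, then return it to the pool."""
    conn = pool.get()        # blocks if all connections are checked out
    try:
        return conn.execute(sql).fetchall()
    finally:
        pool.put(conn)       # hand it back for the next client

# A hundred logical "clients" now share just POOL_SIZE connections.
results = [run_query("SELECT 1") for _ in range(100)]
```

The pooling middleware options below implement essentially this, plus the hard parts: health checks, transaction boundaries, and concurrency.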

I'd start by looking at SQL Relay. It rates pretty well on
Freshmeat and appears to have ongoing development.

There's also dbBalancer, which is supposed to handle native
PostgreSQL pooling. Unfortunately it's still listed as "alpha" and
no updates have been posted in about a year, so it may be dead.

If you can add a layer to your app, you could try AOLserver. It's
high-performance, open source, and supports connection pooling
out of the box, but it would require the client side to communicate
via HTTP.

There's also the connection pooling available in Java.

All of this pooling talk assumes, of course, that your clients aren't
so active that the pool of "real" connections becomes exhausted.
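When the pool does run dry, further borrow attempts have to wait for a connection to be returned, or give up after a timeout. A small illustration of that failure mode, again using a plain queue as a stand-in pool with placeholder strings instead of real connections:

```python
# Sketch of pool exhaustion: with every pooled connection checked
# out, a new borrower must wait, or fail fast with a timeout.
import queue

pool = queue.Queue(maxsize=2)
pool.put("conn-1")
pool.put("conn-2")

# Two busy clients check out both connections...
busy = [pool.get(), pool.get()]

# ...so a third client's borrow attempt times out instead of
# blocking forever.
try:
    pool.get(timeout=0.1)
    exhausted = False
except queue.Empty:
    exhausted = True

# Returning a connection makes the pool usable again.
pool.put(busy.pop())
```

Sizing the pool against how long clients actually hold connections is what determines whether this ever happens in practice.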

Just a few places to look. Others may have ideas as well.

Cheers,
Steve
