From: Mark Roberts <mailing_lists(at)pandapocket(dot)com>
To: pgsql-sql <pgsql-sql(at)postgresql(dot)org>
Subject: Re: more than 1000 connections
Date: 2008-08-06 15:54:40
Message-ID: 1218038080.28304.20.camel@localhost
Lists: pgsql-sql
On Wed, 2008-08-06 at 08:06 +0800, Craig Ringer wrote:
> Out of interest - why 1000 connections?
>
> Do you really expect to have 1000 jobs concurrently active and doing
> work? If you don't, then you'll be wasting resources and slowing things
> down for no reason. There is a connection overhead in PostgreSQL - IIRC
> mostly related to database-wide locking and synchronization, but also
> some memory for each backend - that means you probably shouldn't run
> vastly more backends than you intend to have actively working.
>
> If you described your problem, perhaps someone could give you a useful
> answer. Your mention of pgpool suggests that you're probably using a web
> app and running into connection count limits, but I shouldn't have to
> guess that.
>
> --
> Craig Ringer
This is actually a fantastic point. Have you considered using more than
one box to field the connections, with some form of replication or a
worker process to move the data back to a single master database? I
don't know how feasible it is for you, but it might work out depending
on what kind of application you're trying to write.
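
Just to make the idea concrete, here's a rough Python/psycopg2 sketch of
the kind of mover process I mean. The table name (job_results), columns,
and connection strings are made up for illustration - you'd adapt them to
your actual schema - and it isn't crash-safe as written, just a sketch:

import psycopg2

STAGING_DSN = "dbname=staging host=frontend1"   # hypothetical front-end box
MASTER_DSN = "dbname=warehouse host=master"     # hypothetical master database

def move_batch(batch_size=500):
    src = psycopg2.connect(STAGING_DSN)
    dst = psycopg2.connect(MASTER_DSN)
    try:
        with src.cursor() as s_cur, dst.cursor() as d_cur:
            # Grab a batch of finished rows from the staging box.
            s_cur.execute(
                "SELECT id, payload FROM job_results LIMIT %s",
                (batch_size,))
            rows = s_cur.fetchall()
            if not rows:
                return 0
            # Copy them into the master, then clear them out of staging.
            d_cur.executemany(
                "INSERT INTO job_results (id, payload) VALUES (%s, %s)",
                rows)
            s_cur.execute(
                "DELETE FROM job_results WHERE id = ANY(%s)",
                ([r[0] for r in rows],))
        dst.commit()
        src.commit()
        return len(rows)
    finally:
        src.close()
        dst.close()

if __name__ == "__main__":
    print("moved %d rows" % move_batch())

You'd run something like that on a schedule (or in a loop) per front-end
box, so the master only ever sees a handful of connections instead of a
thousand.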
Disclaimer: I work in data warehousing and we only have 45 concurrent
connections right now. OLTP and/or large connection counts aren't really
what I spend my days thinking about. ;-)
-Mark