From: | John A Meinel <john(at)johnmeinel(dot)com> |
---|---|
To: | Martin Foster <martin(at)ethereal-realms(dot)org> |
Cc: | pgsql-performance(at)postgresql(dot)org |
Subject: | Re: Restricting Postgres |
Date: | 2004-11-03 23:25:27 |
Message-ID: | 418968E7.40700@johnmeinel.com |
Lists: | pgsql-general pgsql-performance |
Martin Foster wrote:
> Simon Riggs wrote:
>
>> On Tue, 2004-11-02 at 23:52, Martin Foster wrote:
[...]
> I've seen this behavior before when restarting the web server during
> heavy loads. Apache goes from zero connections to a solid 120,
> causing PostgreSQL to spawn that many children in a short order of time
> just to keep up with the demand.
>
But wouldn't limiting the number of concurrent connections address this at
the source? If you tell it "you can have at most 20 connections",
postgres would never spawn 120 children.
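A minimal sketch of that cap in postgresql.conf (the value 20 here is just an illustration; pick it to match your workload, and note the setting needs a server restart):

```
# postgresql.conf -- hard cap on the number of backend processes.
# Clients beyond this limit get a "too many clients" error
# instead of spawning another backend.
max_connections = 20
```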
I'm not sure what Apache does if it can't get a DB connection, but this
seems like exactly what you want.
Now, if you expected to have 50 clients that all like to sit on
open connections, you could leave the number of concurrent connections high.
But if your only connections come from the webserver, where all of them are
designed to be short-lived, then keep the max low.
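The same cap can be applied on the webserver side; a sketch for Apache's prefork model (the numbers are illustrative, not from the thread):

```
# httpd.conf -- bound the number of Apache children, which in turn
# bounds the number of simultaneous DB connections they can open.
MaxClients            20
# Recycle children periodically so stuck connections don't accumulate.
MaxRequestsPerChild   1000
```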
The other possibility is having the webserver use connection pooling, so
it uses a few long-lived connections. But even then, you could limit it
to something like 10-20, not 120.
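The pooling idea can be sketched as a small bounded pool. This is a hypothetical illustration only: the `connect` factory stands in for a real database driver, and the class is not from any library mentioned in the thread.

```python
import queue

class ConnectionPool:
    """Bounded pool: at most `size` connections ever exist,
    no matter how many web-server threads ask for one."""

    def __init__(self, connect, size=10):
        self._pool = queue.Queue(maxsize=size)
        # Open the long-lived connections up front.
        for _ in range(size):
            self._pool.put(connect())

    def acquire(self, timeout=None):
        # Blocks (or raises queue.Empty on timeout) instead of
        # letting the database spawn yet another backend.
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        # Hand the connection back for the next request to reuse.
        self._pool.put(conn)

# Usage with a stand-in connection factory:
pool = ConnectionPool(connect=lambda: object(), size=10)
conn = pool.acquire()
# ... run queries ...
pool.release(conn)
```

The point of the bound is the same as max_connections: a burst of 120 web requests waits briefly for a free pooled connection rather than opening 120 backends at once.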
John
=:->
From | Date | Subject | |
---|---|---|---|
Next Message | Martin Foster | 2004-11-03 23:35:52 | Re: Restricting Postgres |
Previous Message | Tom Lane | 2004-11-03 22:56:41 | Re: oid file, but no pg_class row for it |