From: Stuart Bishop <stuart(at)stuartbishop(dot)net>
To: "List, Postgres" <pgsql-general(at)postgresql(dot)org>
Subject: Re: max_connections proposal
Date: 2011-05-27 09:08:49
Message-ID: BANLkTi=dtaiUrSsN9n0OV7jX35VQKqv1uQ@mail.gmail.com
Lists: pgsql-general
On Fri, May 27, 2011 at 6:22 AM, Craig Ringer
<craig(at)postnewspapers(dot)com(dot)au> wrote:
> Best performance is often obtained with the number of _active_ connections
> in the 10s to 30s on commonplace hardware. I'd want to use "hundreds" -
> because mailing list posts etc suggest that people start running into
> problems under load at the 400-500 mark, and more importantly because it's
> well worth moving to pooling _way_ before that point.
If you can. I'd love a connection pool that knows when my session holds a
resource that persists across transactions, such as an open cursor or a
temporary table, and so keeps the backend connection pinned between
transactions; when no such resources exist, it could release the backend
to the pool between transactions. I suspect this sort of pool would need
to be built into core. At the moment I only see a benefit from pooling
the connections from my webapp, which I know can safely go through
pgbouncer in transaction pooling mode.
Or is there some way for a session to detect whether it has access to
state that persists across transactions, so that this feature could be
added to the existing connection pools?
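(For what it's worth, a session can inspect some of this state itself via the
system views, available since PostgreSQL 8.2. This is only a sketch of the idea,
not a complete inventory: session-level advisory locks, LISTEN registrations,
and session GUC changes would also need checking before a pooler could safely
reuse the backend.)

```sql
-- Open cursors held by the current session (pg_cursors is per-session)
SELECT count(*) AS open_cursors FROM pg_cursors;

-- Prepared statements in the current session
SELECT count(*) AS prepared_statements FROM pg_prepared_statements;

-- Temporary tables belonging to the current session
SELECT count(*) AS temp_tables
FROM pg_class
WHERE relnamespace = pg_my_temp_schema();
```

If all three counts are zero (and the other session state mentioned above is
also clean), the backend is a plausible candidate for release back to the pool.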
--
Stuart Bishop <stuart(at)stuartbishop(dot)net>
http://www.stuartbishop.net/