From: Paul Lindner <lindner(at)inuus(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Paul Lindner <lindner(at)inuus(dot)com>, Oliver Jowett <oliver(at)opencloud(dot)com>, pgsql-jdbc(at)postgresql(dot)org
Subject: Re: Prepared Statements vs. pgbouncer
Date: 2007-09-29 07:21:59
Message-ID: 20070929072159.GI3140@inuus.com
Lists: pgsql-jdbc
I appear to have stirred the pot a little too vigorously.. Let's take
a deep breath and take a step back..
First off, I really appreciate the hard work that's gone into the
design and implementation of Postgres and the JDBC driver. I realize
that what I'm trying to do falls outside of the norms -- hopefully the
following background information will help everyone understand what
I'm trying to achieve:
The environment:
* 100s of application servers using Torque and DBCP
* Dozens of databases.
* All app servers can connect to all databases.
* Each application server may need many connections to an individual
database.
* App code as written will spawn multiple concurrent SELECTs to
a single database to speed up queries on partitioned tables.
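To make that last point concrete, here's a minimal sketch of the kind of
fan-out the app code does; the table name, partition count, and DataSource
wiring are made up for illustration, and the real code sits behind
Torque/DBCP rather than raw JDBC:

    import javax.sql.DataSource;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class PartitionFanOut {

        // Fires the same SELECT at N partitions of one table in parallel,
        // each on its own pooled connection, then sums the row counts.
        public static int countAcrossPartitions(final DataSource ds,
                                                final long userId) throws Exception {
            final int partitions = 4;
            ExecutorService pool = Executors.newFixedThreadPool(partitions);
            try {
                List<Future<Integer>> results = new ArrayList<Future<Integer>>();
                for (int i = 0; i < partitions; i++) {
                    final int part = i;
                    results.add(pool.submit(new Callable<Integer>() {
                        public Integer call() throws Exception {
                            // One connection per concurrent SELECT; this is why a
                            // single app server needs several connections to one DB.
                            Connection c = ds.getConnection();
                            try {
                                PreparedStatement ps = c.prepareStatement(
                                    "SELECT count(*) FROM events_part_" + part
                                    + " WHERE user_id = ?");
                                ps.setLong(1, userId);
                                ResultSet rs = ps.executeQuery();
                                rs.next();
                                return rs.getInt(1);
                            } finally {
                                c.close();   // return the connection to the DBCP pool
                            }
                        }
                    }));
                }
                int total = 0;
                for (Future<Integer> f : results) {
                    total += f.get();
                }
                return total;
            } finally {
                pool.shutdown();
            }
        }
    }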
Okay.. So given those parameters we've been able to tune the system
to use about 2000 maxconns on the DBs, and a max of 8 connections from
each application server.
In spite of that the majority of connections are idle. So we kill off
idle backends with cron.. Not ideal, but it's worked so far..
Okay... now let's double or triple the number of application servers..
That means either:
1. Going to 4k or 6k backend maxconns.
2. Cutting the number of simultaneous connections from each app server
to a half or a third.
3. Using pgbouncer to allow 6k client connections while actually
lowering the number of DB backends, plus gaining some very cool
maintenance features like redirecting connections to other hosts
and more.
So we're trying to implement #3.
If others have better ideas I'm all ears.
Our pgbouncer config will keep a connection on the same backend
for the duration of an individual transaction.
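Concretely, that's pgbouncer's transaction pooling mode. A rough sketch of
the relevant pgbouncer.ini bits, with placeholder host names, ports, and
pool sizes rather than our production values:

    [databases]
    ; clients ask pgbouncer for "appdb" and get routed to the real host
    appdb = host=db1.example.com port=5432 dbname=appdb

    [pgbouncer]
    listen_addr = *
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    ; transaction pooling: a client owns a backend only for the duration
    ; of a transaction, then the backend goes back into the pool
    pool_mode = transaction
    max_client_conn = 6000
    default_pool_size = 50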
The only thing holding us back from deploying the pgbouncer solution
is this issue with the server-side prepared statements.
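For anyone who hasn't run into it, here is roughly how the clash shows up;
the URL, credentials, and table name are placeholders, and the exact
behavior depends on prepareThreshold and on which backend the pooler
hands out:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class NamedStatementClash {
        public static void main(String[] args) throws Exception {
            // Connect through pgbouncer (transaction pooling) rather than
            // straight to the backend.
            Connection c = DriverManager.getConnection(
                    "jdbc:postgresql://pgbouncer-host:6432/appdb", "app", "secret");

            PreparedStatement ps =
                    c.prepareStatement("SELECT name FROM widgets WHERE id = ?");

            for (int i = 1; i <= 10; i++) {
                ps.setInt(1, i);
                ps.executeQuery().close();
                // After prepareThreshold executions (5 by default) the driver
                // switches to a named server-side statement such as S_1.  In
                // autocommit mode each execute is its own transaction, so the
                // next one may land on a backend that has never seen that name
                // and fail with something like:
                //   ERROR: prepared statement "S_1" does not exist
            }
            c.close();
        }
    }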
Possible solutions:
* Use protocolVersion=2, since the 7.3-era V2 protocol has no server-side
prepare (a sample connection URL is sketched after this list).
* Modify the JDBC driver to use unique prefixes for server-side
prepared statements, then build my own jar and deploy it.
* Request help to solve this the 'correct' way.
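For the first option the change is confined to the connection URL; a
minimal sketch, assuming placeholder host, database, and credentials:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class V2ProtocolWorkaround {
        public static void main(String[] args) throws Exception {
            // protocolVersion=2 makes the driver speak the old V2 protocol,
            // which has no Parse/Bind/Execute and therefore creates no
            // server-side prepared statements for the pooler to trip over,
            // at the cost of the V3-only features.
            Connection c = DriverManager.getConnection(
                    "jdbc:postgresql://pgbouncer-host:6432/appdb?protocolVersion=2",
                    "app", "secret");

            Statement st = c.createStatement();
            ResultSet rs = st.executeQuery("SELECT version()");
            rs.next();
            System.out.println(rs.getString(1));
            c.close();
        }
    }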
I realize that this environment is not so common. All I ask is help
in making it possible.
FWIW it seems that Oracle has something similar in 11g named DRCP:
http://www.oracle.com/technology/tech/oci/pdf/oracledrcp11g.pdf
So maybe it's not such an uncommon case after all...
Thanks in advance for any assistance, pointers, or tips on this matter.
On Fri, Sep 28, 2007 at 10:31:36PM -0400, Tom Lane wrote:
> Paul Lindner <lindner(at)inuus(dot)com> writes:
> > It seems that the driver should immediately deallocate the server-side
> prepared statements it creates for cursors when it finishes fetching
> > data for that cursor.
>
> What exactly is the argument here? That no client should use anything
> more than the fraction of the FE/BE protocol that pgbouncer currently
> supports? Pardon me for not buying into it.
>
> regards, tom lane
--
Paul Lindner ||||| | | | | | | | | |
lindner(at)inuus(dot)com