From: Bob Dusek <redusek(at)gmail(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: performance config help
Date: 2010-01-13 20:10:04
Message-ID: 61039b861001131210j673f1200r16f60f26f5a22265@mail.gmail.com
Lists: pgsql-performance
FYI - We have implemented a number of changes...
a) some query and application optimizations
b) a connection pool (on the cheap: we set the max number of clients on
the Postgres server and created a blocking wrapper around pg_pconnect
that blocks until it gets a connection)
c) moved the application server to a separate box
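The "cheap" pool in item (b) above can be sketched roughly as follows. This is a minimal illustration of the technique, not the poster's actual code: a fixed number of slots matching the server-side client cap, with callers blocking until a slot frees up. All names here (make_conn, get_conn, release_conn) are hypothetical stand-ins for the PHP pg_pconnect wrapper described in the post.

```python
# Hedged sketch of a blocking connection-pool wrapper, assuming a
# fixed server-side limit on clients. Names are illustrative only.
import threading

MAX_CLIENTS = 10  # mirrors the max clients configured on the Postgres server

_slots = threading.Semaphore(MAX_CLIENTS)


def make_conn():
    # Stand-in for a real connect call (pg_pconnect in the post).
    return object()


def get_conn(timeout=None):
    # Block until a slot is available, like the blocking wrapper
    # around pg_pconnect described above.
    if not _slots.acquire(timeout=timeout):
        raise TimeoutError("no free connection slot")
    return make_conn()


def release_conn(conn):
    # Returning the slot is what makes a connection "available" again.
    # If this step is delayed (the symptom described below with
    # pg_close and persistent connections), waiters stall here.
    _slots.release()
```

The key design point is that the cap lives in the wrapper, not just on the server: callers queue instead of hitting "too many clients" errors.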
And, we pretty much doubled our capacity... from approx 40 "requests"
per second to approx 80.
The problem with our "cheap" connection pool is that the persistent
connections don't seem to be available immediately after they're
released by the previous process. pg_close doesn't seem to help the
situation. We understand that pg_close doesn't really close a
persistent connection, but we were hoping that it would cleanly
release it for another client to use. Curious.
We've also tried third-party connection pools, and they don't seem to
be very fast.
Thanks for all of your input. We really appreciate it.
Bob