On 11/15/2011 01:42 AM, Cody Caughlan wrote:
> We have anywhere from 60-80 background worker processes connecting to
> Postgres, performing a short task and then disconnecting. The lifetime
> of these tasks averages 1-3 seconds.
>
> I know that there is some connection overhead to Postgres, but I don't
> know what the best way would be to measure this overhead, or to
> determine whether it's currently an issue at all.
>
> If there is substantial overhead, I would think that employing a
> connection pool like pgbouncer - keeping a static list of these
> connections and doling them out to the transient workers on demand -
> would help.
>
> So the overall cumulative number of connections wouldn't change; I
> would just be trying to alleviate the rapid setup/teardown of them.
>
> Is this something I should look into, or is it not much of an issue?
> What's the best way to determine whether I could benefit from using a
> connection pool?
>
> Thanks.
>
I had a case where a pooler (in this case pgpool) yielded a 140%
improvement in application performance - so, yes, it is probably a win
to use a pooling solution.
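As for measuring the overhead: one simple approach is to time the same
batch of tasks twice, once with a fresh connection per task (the current
worker pattern) and once with a single reused connection (what a pooler
approximates); the gap is the setup/teardown cost a pooler would save.
A minimal sketch - the stub connection and its 2 ms sleep are placeholders
for a real driver call such as psycopg2.connect():

```python
import time

def measure_connect_overhead(connect, run_task, n=50):
    """Compare n connect-per-task cycles against n tasks on one connection.

    `connect` is any zero-arg callable returning an object with .close();
    `run_task` runs one task's queries against a connection.
    """
    # Current pattern: each worker connects, works, disconnects.
    t0 = time.perf_counter()
    for _ in range(n):
        conn = connect()
        run_task(conn)
        conn.close()
    per_task = time.perf_counter() - t0

    # Pooled pattern (approximated): one connection reused for every task.
    t0 = time.perf_counter()
    conn = connect()
    for _ in range(n):
        run_task(conn)
    conn.close()
    reused = time.perf_counter() - t0
    return per_task, reused

# Stub standing in for a real driver; with psycopg2 you would use e.g.
#   connect = lambda: psycopg2.connect(dsn)
class StubConn:
    def close(self):
        pass

def stub_connect():
    time.sleep(0.002)  # pretend TCP + auth + backend startup costs 2 ms
    return StubConn()

per_task, reused = measure_connect_overhead(stub_connect, lambda conn: None)
print(f"connect-per-task: {per_task:.3f}s, reused: {reused:.3f}s")
```

If per_task comes out much larger than reused with your real connect and
queries, a transaction-mode pooler (pgbouncer with pool_mode = transaction)
should recover most of that difference for short-lived workers like these.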