From: "Deshpande, Yogesh Sadashiv (STSD-Openview)" <yogesh-sadashiv(dot)deshpande(at)hp(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Postgre Performance
Date: 2011-10-18 13:57:08
Message-ID: D1845D432CC3A64F8653DFEFB3732D5017B325BD@G4W3296.americas.hpqcorp.net
Lists: pgsql-general
Hello,
We have a setup in which around 100 client processes run in parallel every 5 minutes, and each of them opens its own connection to the database. We are observing that for each connection, PostgreSQL also creates one server (backend) process. We have set max_connections to 100, so the total number of processes on the system is close to 200 every 5 minutes, and because of this we are seeing very high CPU usage. We need the following information:
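For reference, the backend count can be compared against max_connections with a minimal sketch like the following, assuming Python with psycopg2 (dbname/user/host are placeholders):

import psycopg2

# One row in pg_stat_activity corresponds to one server backend process.
conn = psycopg2.connect(dbname="postgres", user="postgres", host="localhost")
try:
    with conn.cursor() as cur:
        cur.execute("SELECT count(*) FROM pg_stat_activity;")
        print("active backends:", cur.fetchone()[0])
        cur.execute("SHOW max_connections;")
        print("max_connections:", cur.fetchone()[0])
finally:
    conn.close()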
1. Is there any configuration that would pool connection requests rather than failing once the connection limit is exceeded?
2. Is there any configuration that would limit the number of backend processes to some value, say 50, and queue any further connection requests?
Basically, we want to limit the number of processes so that client code doesn't have to retry when no connection or backend process is available; can PostgreSQL take care of the queuing itself? The sketch below shows the behaviour we are after.
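For concreteness, here is a minimal client-side sketch of that queuing behaviour, assuming Python with psycopg2 (the connection parameters and the cap of 50 are illustrative):

import threading
import psycopg2
from psycopg2 import pool

MAX_BACKENDS = 50  # illustrative cap on simultaneous server backends

# The pool holds at most MAX_BACKENDS connections; the semaphore makes
# excess callers wait instead of hitting a pool-exhausted error.
_pool = pool.ThreadedConnectionPool(
    minconn=1, maxconn=MAX_BACKENDS,
    dbname="postgres", user="postgres", host="localhost")
_slots = threading.BoundedSemaphore(MAX_BACKENDS)

def run_query(sql):
    # Blocks (queues) when all MAX_BACKENDS connections are busy.
    with _slots:
        conn = _pool.getconn()
        try:
            with conn.cursor() as cur:
                cur.execute(sql)  # read-only sketch; commit for writes
                return cur.fetchall()
        finally:
            _pool.putconn(conn)

With something like this, callers beyond the cap simply block inside run_query() until a connection is returned to the pool, rather than failing and having to retry.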
Thanks
Yogesh