From: Catalin Iacob <iacobcatalin(at)gmail(dot)com>
To: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
Cc: Marko Kreen <markokr(at)gmail(dot)com>, Merlin Moncure <mmoncure(at)gmail(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: How to keep queries low latency as concurrency increases
Date: 2012-11-25 16:30:04
Message-ID: CAHg_5grrAQKEaP+ZVHBOejX_kU90WvPOshd0v28zwxiBiPeEGQ@mail.gmail.com
Lists: pgsql-performance
Thanks to everybody for their help, and sorry for not getting back
earlier; available time shrank very quickly as the deadline
approached, and afterwards this slipped my mind.
On Tue, Nov 6, 2012 at 12:31 AM, Jeff Janes <jeff(dot)janes(at)gmail(dot)com> wrote:
> It still has something to contribute if connections are made and
> broken too often (pgbench -C type workload), as seems to be the case
> here.
Django opens a connection for every request and closes it at the end
of the request. As far as I know you can't override this; they tell
you that if connection overhead is too big you should use a connection
pool like pgbouncer. You still pay connection-setup latency and some
overhead in pgbouncer, but you skip creating a Postgres backend
process for each new connection. And indeed, after starting to use
pgbouncer we could handle more concurrent users.
> If he can get an application-side pooler (or perhaps just a change in
> configuration) such that the connections are not made and broken so
> often, then removing pgbouncer from the loop would probably be a win.
Django doesn't offer application-side poolers, they tell you to use
pgbouncer (see above). So pgbouncer is a net gain since it avoids
Postgres process spawning overhead.
Following recommendations in this thread, I replaced the global
pgbouncer on the DB machine with one pgbouncer on each webserver
machine, and that helped. I didn't rerun the synthetic ab test from my
initial message on the new configuration, but in our more realistic
tests page response times did shorten. The system is in production
now, so it's harder to run the tests again to see exactly how much it
helped, but it definitely did.
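For reference, a minimal per-webserver pgbouncer configuration along
these lines could look like the sketch below; the host name, database
name, and pool sizes are illustrative, not the ones we actually used:

```ini
[databases]
; Django connects to "appdb" on localhost:6432; pgbouncer forwards
; to the real Postgres server over a small pooled set of connections
appdb = host=db.example.com port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling lets many short-lived Django connections
; share a few long-lived server connections
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20
```

With this in place each webserver's Django settings just point at
127.0.0.1:6432 instead of the DB machine, so connection setup stays
local to the box.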
So it seems we're simply making too many connections and issuing too
many queries. Each page view from a user translates to multiple
requests to the application server, and each of those translates to a
connection and at least a few queries (which are done in middleware
and therefore happen for each and every request). One pgbouncer can
handle lots of concurrent idle connections and lots of queries/second,
but our 9000 queries/second seem to push it too far. The longer term
solution for us would probably be to make fewer connections (by making
fewer Django requests per page) and fewer queries; before our deadline
we were just looking for a short term solution to handle an expected
traffic spike.
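One way to cut the per-request middleware queries would be to cache
their results for a short TTL instead of hitting Postgres every time.
A minimal sketch of the idea in plain Python (the names `cached`,
`loader`, and the 30 second TTL are illustrative, not anything Django
provides):

```python
import time

_cache = {}  # key -> (expires_at, value)
TTL = 30  # seconds; settings this stale are acceptable for us

def cached(key, loader, now=time.time):
    """Return the cached value for key, calling loader() on a miss
    or after expiry. loader() is where the actual DB query happens."""
    entry = _cache.get(key)
    t = now()
    if entry is None or entry[0] <= t:
        value = loader()
        _cache[key] = (t + TTL, value)
        return value
    return entry[1]
```

At 9000 queries/second, even a short TTL like this turns a query per
request into a query every 30 seconds per webserver for that lookup.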
Cheers,
Catalin Iacob