From: Jeremy Schneider <schneider(at)ardentperf(dot)com>
To: Stephan Schmidt <schmidt(at)dltmail(dot)de>, "pgsql-performance(at)lists(dot)postgresql(dot)org" <pgsql-performance(at)lists(dot)postgresql(dot)org>
Subject: Re: impact of auto explain on overall performance
Date: 2019-03-14 19:58:18
Message-ID: 50a93298-5e11-f625-9a78-57d02ea80404@ardentperf.com
Lists: pgsql-performance
On 3/14/19 00:29, Stephan Schmidt wrote:
> I'm currently working on a high-performance database and want to make
> sure that whenever there are slow queries during regular operations I've
> got all information about the query in my logs. So auto_explain comes to
> mind, but the documentation explicitly states that it comes at a cost.
> My question is, how big is the latency added by auto_explain, in
> percentage or in ms?
One thought - what if the problem query is a 4 ms query that just went to
6 ms, but it's executed millions of times per second? That 2 ms change is a
50% increase in the load that statement puts on the system.
The approach I've had the most success with is to combine active session
sampling (from pg_stat_activity) with pg_stat_statements (ideally with
periodic snapshots) to identify problematic SQL statements, then use
explain analyze after you've identified them.
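
As a rough sketch of what I mean (column names like total_time are the ones
in the pg_stat_statements shipped with current releases and change in later
versions, and the suspect query at the end is only a placeholder):

  -- Sample what's running right now; run this every few seconds from a
  -- scheduler and keep the rows, so you can see which queries and wait
  -- events dominate over time.
  SELECT now() AS sample_time, pid, state, wait_event_type, wait_event, query
  FROM pg_stat_activity
  WHERE state <> 'idle';

  -- Statements with the largest total time (requires pg_stat_statements).
  SELECT queryid, calls, total_time, mean_time, query
  FROM pg_stat_statements
  ORDER BY total_time DESC
  LIMIT 20;

  -- Then reproduce a suspect by hand with representative parameters:
  EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM my_table WHERE id = 42;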
There are a handful of extensions on the internet that can do active
session sampling for you, and I've seen a few scripts that can be put
into a scheduler to capture snapshots of stats tables.
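
For the snapshot part, even something this simple works, run from cron or
pg_cron every few minutes (the table name here is made up; diffing two
consecutive snapshots shows which statements grew in between):

  -- One-time setup: empty history table with the same columns plus a timestamp.
  CREATE TABLE statements_history AS
  SELECT now() AS snapshot_time, * FROM pg_stat_statements LIMIT 0;

  -- Scheduled job: append a snapshot.
  INSERT INTO statements_history
  SELECT now(), * FROM pg_stat_statements;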
Maybe something to consider in addition to the auto_explain stuff.
-Jeremy