Re: SELECT slows down on sixth execution

From: Jonathan Rogers <jrogers(at)socialserve(dot)com>
To: Albe Laurenz <laurenz(dot)albe(at)wien(dot)gv(dot)at>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: SELECT slows down on sixth execution
Date: 2015-10-17 02:14:37
Message-ID: 5621AF0D.5080203@socialserve.com
Lists: pgsql-performance

On 10/16/2015 08:37 AM, Albe Laurenz wrote:
> Jonathan Rogers wrote:
>>> Look at the EXPLAIN ANALYZE output for both the custom plan (one of the
>>> first five executions) and the generic plan (the one used from the sixth
>>> time on) and see if you can find and fix the cause for the misestimate.
>>
>> Yes, I have been looking at both plans and can see where they diverge.
>> How could I go about figuring out why Postgres fails to see the large
>> difference in plan execution time? I use exactly the same parameters
>> every time I execute the prepared statement, so how would Postgres come
>> to think that those are not the norm?
>
> PostgreSQL does not consider the actual query execution time; it only
> compares its estimates for the generic and the custom plans.
> Also, it does not keep track of the parameter values you supply,
> only of the average custom plan query cost estimate.

OK, that makes more sense then. For testing purposes, it's somewhat
tedious to execute a prepared statement six times just to see the plan
that needs to be optimized. Unfortunately, based on Pavel Stehule's
reply, there doesn't seem to be any way to force use of a generic plan
in SQL.
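For reference, the test procedure looks roughly like this (the table,
statement name, and parameter values here are made-up placeholders, not
the actual query under discussion):

```sql
-- Hypothetical example; names and values are for illustration only.
PREPARE listing_search (text, int) AS
    SELECT * FROM listings WHERE city = $1 AND bedrooms >= $2;

-- Executions 1-5 each get a custom plan built for these parameters.
EXPLAIN ANALYZE EXECUTE listing_search ('Birmingham', 2);
EXPLAIN ANALYZE EXECUTE listing_search ('Birmingham', 2);
EXPLAIN ANALYZE EXECUTE listing_search ('Birmingham', 2);
EXPLAIN ANALYZE EXECUTE listing_search ('Birmingham', 2);
EXPLAIN ANALYZE EXECUTE listing_search ('Birmingham', 2);

-- From the sixth execution on, the cached generic plan may be chosen
-- if its cost estimate beats the average custom plan cost estimate.
EXPLAIN ANALYZE EXECUTE listing_search ('Birmingham', 2);

DEALLOCATE listing_search;
```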

>
> The problem is either that the planner underestimates the cost of
> the generic plan or overestimates the cost of the custom plans.
>
> If you look at the EXPLAIN ANALYZE outputs (probably with
> http://explain.depesz.com ), are there any row count estimates that
> differ significantly from reality?

Now that I've read the help and understand what the "rows x" factor
means, I can see that while both plans underestimate the returned rows,
the generic one underestimates them by a much larger factor. In this
case, the solution is to avoid preparing the query, ensuring a custom
plan is used every time.

Since the planner is significantly underestimating row counts even when
making custom plans, I will continue to try to improve the planner's
information. My default_statistics_target is currently 500. I suppose I
should experiment with increasing it for certain columns.
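For the record, raising the target on individual columns would look
something like this (table and column names are placeholders):

```sql
-- Hypothetical names; raise the sample size for one skewed column,
-- overriding default_statistics_target for that column only.
ALTER TABLE listings ALTER COLUMN city SET STATISTICS 2000;

-- Re-analyze so the larger statistics target takes effect.
ANALYZE listings;
```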

Thanks for the pointers.

--
Jonathan Rogers
Socialserve.com by Emphasys Software
jrogers(at)emphasys-software(dot)com
