From: "Albe Laurenz" <laurenz(dot)albe(at)wien(dot)gv(dot)at>
To: "AndyG *EXTERN*" <andy(dot)gumbrecht(at)orprovision(dot)com>, <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Slow query, where am I going wrong?
Date: 2012-10-31 13:29:44
Message-ID: D960CB61B694CF459DCFB4B0128514C208A4E327@exadv11.host.magwien.gv.at
Lists: pgsql-performance
> But why? Is there a way to force the planner into this?
I don't know enough about the planner to answer the "why",
but the root of the problem seems to be the mis-estimate
for the join between test_result and recipe_version
(1348 instead of 21983 rows).
That makes the planner think that a nested loop join
would be cheaper, but it really is not.
I had hoped that improving the statistics would lead to
a better estimate.
The only way to force the planner to choose the other plan
is to set enable_nestloop=off, and you should do that for
this one query only.
And even that is a bad idea, because for different
constant values or when the table data change, a nested
loop join might actually be the best choice.
I don't know how to solve that problem.
Yours,
Laurenz Albe