From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Tim Dudgeon <tdudgeon(dot)ml(at)gmail(dot)com>
Cc: pgsql-sql(at)postgresql(dot)org
Subject: Re: slow sub-query problem
Date: 2014-11-17 20:10:46
Message-ID: 23343.1416255046@sss.pgh.pa.us
Lists: pgsql-sql
Tim Dudgeon <tdudgeon(dot)ml(at)gmail(dot)com> writes:
> I'm having problems optimising a query that's very slow due to a sub-query.
I think it might get better if you could fix this misestimate:
> " -> Bitmap Index Scan on idx_sp_property_id
> (cost=0.00..33.90 rows=1146 width=0) (actual time=51.656..51.656 rows=811892 loops=366)"
> " Index Cond: (property_id = ANY ('{1,643413,1106201}'::integer[]))"
1146 estimated vs 811892 actual is pretty bad, and it doesn't seem like
this is a very hard case to estimate. Are the stats for structure_props
up to date? Maybe you need to increase the statistics target for the
property_id column.
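A minimal sketch of the suggested fix, assuming the table and column names visible in the plan (structure_props.property_id); the target value 1000 is an illustration, not a recommendation from the thread:

```sql
-- Raise the per-column statistics target above the default (100),
-- then re-gather statistics so the planner sees a larger MCV list
-- and histogram for property_id.
ALTER TABLE structure_props ALTER COLUMN property_id SET STATISTICS 1000;
ANALYZE structure_props;
```

After the ANALYZE, re-running EXPLAIN ANALYZE on the original query should show whether the bitmap index scan's row estimate has moved closer to the actual count.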
Another component of the bad plan choice is this misestimate:
> " -> HashAggregate (cost=1091.73..1091.75 rows=2 width=4) (actual time=2.829..3.212 rows=366 loops=1)"
> " Group Key: structure_props_1.structure_id"
but it might be harder to do anything about that one, since the result
depends on the property_id being probed; without cross-column statistics
it may be impossible to do much better.
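Editorial aside: at the time of this message PostgreSQL had no cross-column statistics, but readers on PostgreSQL 10 or later can create extended statistics on one table. A hedged sketch, reusing the column names from the plan above purely for illustration:

```sql
-- CREATE STATISTICS (PostgreSQL 10+) lets the planner track
-- functional dependencies and distinct-value counts across
-- columns of the same table.
CREATE STATISTICS structure_props_stats (dependencies, ndistinct)
    ON property_id, structure_id
    FROM structure_props;
ANALYZE structure_props;
```

Even so, this only helps within one table; it would not by itself fix an estimate that depends on which property_id values are probed at run time.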
regards, tom lane
In this thread:
Next message: Tim Dudgeon, 2014-11-18 08:53:38, Re: slow sub-query problem
Previous message: David Johnston, 2014-11-17 19:19:49, Re: slow sub-query problem