From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: rguha(at)indiana(dot)edu
Cc: Adam Rich <adam(dot)r(at)sbcglobal(dot)net>, pgsql-general(at)postgresql(dot)org
Subject: Re: suggestions on improving a query
Date: 2007-02-14 15:55:48
Message-ID: 23874.1171468548@sss.pgh.pa.us
Lists: pgsql-general
Rajarshi Guha <rguha(at)indiana(dot)edu> writes:
> Clearly a big improvement in performance.
Huh? It looks like exactly the same plan as before. Any improvement
you're seeing must be coming from cache effects.
> It looks like there's a big mismatch between the expected and observed costs and times.
Well, in the first place the estimated costs are not measured in
milliseconds, and in the second place the estimated cost and rowcount
are for execution of the plan node to completion, which is not happening
here because of the Limit --- we'll stop the plan as soon as the top
join node has produced 10 rows. In fact I'd say the whole problem here
is that the planner is being too optimistic about the benefits of a
fast-start plan. For whatever reason (most likely, an unfavorable
correlation between dock.target and dockscore_plp.total), the desired
rows aren't uniformly scattered in the output of the join, and so it's
taking longer than expected to find 10 of them.
regards, tom lane
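
To make the fast-start issue concrete, here is a rough sketch of the shape of query under discussion. The join key (dock_id) and the filter value are assumptions for illustration; only dock.target, dockscore_plp.total, and the LIMIT 10 come from the thread.

    -- Sketch only: s.dock_id and 'some_target' are made up; the real query
    -- in the thread has its own join condition and filter value.
    EXPLAIN ANALYZE
    SELECT d.*, s.total
    FROM dock d
    JOIN dockscore_plp s ON s.dock_id = d.id
    WHERE d.target = 'some_target'
    ORDER BY s.total
    LIMIT 10;
    -- The estimated cost and row count on each node describe running that
    -- node to completion; the Limit stops execution once the top join has
    -- produced 10 rows, so the "actual" figures can be far smaller than the
    -- estimates.  If the rows matching the target are not spread evenly
    -- through the join output in s.total order, the fast-start plan must
    -- read much more input than it expected before it finds 10 of them.

Without the LIMIT, the planner would typically weigh total cost rather than startup cost and could well choose a different join strategy.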