From: Oleg Bartunov <oleg(at)sai(dot)msu(dot)su>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Pgsql Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Very ineffective plan with merge join
Date: 2010-04-16 09:41:05
Message-ID: Pine.LNX.4.64.1004161337390.7097@sn.sai.msu.ru
Lists: pgsql-hackers
On Thu, 15 Apr 2010, Tom Lane wrote:
> Oleg Bartunov <oleg(at)sai(dot)msu(dot)su> writes:
>> below is an example of an interesting query and two plans - the bad plan,
>> which uses a merge join and a big sort, took 216 sec, and the good plan
>> with merge join disabled took 8 sec.
>
> The "good" plan seems to be fast mainly because of heavily cached inner
> indexscans. If that's the normal operating state for this database, you
> should try reducing random_page_cost.
Hmm, reducing random_page_cost to 3 helps; now all plans are the same.
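
(For the archives, this is roughly what I did; the join below is only a
stand-in for the real query, not the actual schema:)

  SET random_page_cost = 3;                            -- session-level; default is 4.0
  EXPLAIN ANALYZE
  SELECT count(*) FROM big_a JOIN big_b USING (id);    -- placeholder for the real query
  -- to make the change permanent, set random_page_cost in postgresql.conf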
>
> Also, as Pavel noted, the sub-join size estimates aren't very good, and
> those overestimates are discouraging it from using inner-indexscan
> nestloops. I'm not sure how much it would help to increase the
> statistics targets, but that would be worth trying.
Yes, setting the statistics target to 1000 helped for that particular query (reduced by me),
but the full query still chooses the wrong plan with a merge join. As you said above,
random_page_cost=3 helped.
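
(Again only a rough sketch; the table and column names below are placeholders
for the actual join columns whose targets I raised:)

  ALTER TABLE big_a ALTER COLUMN id SET STATISTICS 1000;  -- per-column target
  ANALYZE big_a;                                          -- re-gather stats at the new target
  -- alternatively, default_statistics_target can be raised in postgresql.conf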
I'm wondering whether Postgres could recognize such a case (heavily cached inner
indexscans) on its own.
Regards,
Oleg
_____________________________________________________________
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru)
Sternberg Astronomical Institute, Moscow University, Russia
Internet: oleg(at)sai(dot)msu(dot)su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83