From: Christopher Kings-Lynne <chriskl(at)familyhealth(dot)com(dot)au>
To: Christopher Browne <cbbrowne(at)acm(dot)org>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Cost-based optimizers
Date: 2005-12-13 04:44:50
Message-ID: 439E51C2.7060703@familyhealth.com.au
Lists: pgsql-hackers
> I saw it in print; the only thing that seemed interesting about it was
> the recommendation that query optimization be biased towards the
> notion of "stable plans," query plans that may not be the most
> "aggressively fast," but which don't fall apart into hideous
> performance if the estimates are a little bit off.
And the answer is interesting as well:
"I think we have to approach it in two ways. One is that you have to be
able to execute good plans, and during the execution of a plan you want
to notice when the actual data is deviating dramatically from what you
expected. If you expected five rows and you’ve got a million, chances
are your plan is not going to do well because you chose it based on the
assumption of five. Thus, being able to correct mid-course is an area of
enhancement for query optimizers that IBM is pursuing."
Hmmm, dynamic re-planning!
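The mid-course correction described in the quote can be sketched in a few lines. This is a hypothetical illustration, not PostgreSQL (or DB2) internals: an executor starts with a nested-loop join chosen for a small cardinality estimate, counts actual rows as they arrive, and switches to a hash join once the actual count dwarfs the estimate. All names (`adaptive_join`, the threshold factor) are invented for the sketch.

```python
def adaptive_join(outer_rows, inner_rows, key, estimated_outer=5, threshold=100):
    """Join two row sets on `key`, correcting the plan mid-execution.

    Starts with a nested loop (cheap when the outer side really is ~5 rows);
    if the actual outer cardinality exceeds the estimate by `threshold`x,
    builds a hash table on the inner side and finishes as a hash join.
    Returns (joined_pairs, replanned_flag).
    """
    inner_list = list(inner_rows)
    result = []
    seen = 0
    replanned = False
    hash_table = None
    for row in outer_rows:
        seen += 1
        # Deviation check: "expected five rows and you've got a million".
        if not replanned and seen > estimated_outer * threshold:
            replanned = True
            hash_table = {}
            for irow in inner_list:
                hash_table.setdefault(irow[key], []).append(irow)
        if replanned:
            # Hash join: O(1) probe per outer row.
            for irow in hash_table.get(row[key], []):
                result.append((row, irow))
        else:
            # Nested loop: fine while the outer side stays small.
            for irow in inner_list:
                if irow[key] == row[key]:
                    result.append((row, irow))
    return result, replanned
```

A real optimizer would of course re-invoke planning rather than hard-code the fallback, but the shape is the same: compare observed cardinality to the estimate, and abandon the original plan when they diverge dramatically.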
Chris