From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Josh Berkus <josh(at)agliodbs(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Greg Stark <gsstark(at)mit(dot)edu>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Idea about estimating selectivity for single-column expressions
Date: 2009-08-19 19:04:29
Message-ID: 603c8f070908191204t3cbe136ge0ec0530707a1337@mail.gmail.com
Lists: pgsql-hackers
On Wed, Aug 19, 2009 at 3:00 PM, Josh Berkus <josh(at)agliodbs(dot)com> wrote:
> Tom, Greg, Robert,
>
> Here's my suggestion:
>
> 1. First, estimate the cost of the node with a very pessimistic (50%?)
> selectivity for the calculation.
There is no such thing as a pessimistic selectivity estimate. Right
now a lot of things use nested loops when they should hash, because we
use 0.5%. If we changed it to 50%, then we'd have the opposite
problem (and maybe some merge joins where we should be hashing).
Unfortunately, there is no substitute for accurate estimates. Of
course, if the executor had the ability to switch from a nested loop to a
hash join in mid-query it would help a great deal, but that's a much
bigger project, I think.
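To see why no single default works, here is a toy cost model (my own
illustrative sketch, not PostgreSQL's actual costing): a filter of unknown
selectivity runs on the outer table, a nested loop probes an index on the
inner table once per surviving outer row, and a hash join pays to build a
hash table on the inner table up front. The table size and cost formulas
are assumptions for illustration only.

```python
import math

ROWS = 1_000_000  # rows per table (hypothetical)

def nestloop_cost(sel):
    # One index probe (~log2(ROWS) work) per outer row that survives
    # the filter; cheap only when the estimated selectivity is low.
    return sel * ROWS * math.log2(ROWS)

def hashjoin_cost(sel):
    # Build a hash table over the whole inner table, then probe it once
    # per surviving outer row; the build cost is paid regardless of sel.
    return ROWS + sel * ROWS

for sel in (0.005, 0.5):  # the current 0.5% default vs. a "pessimistic" 50%
    nl, hj = nestloop_cost(sel), hashjoin_cost(sel)
    choice = "nested loop" if nl < hj else "hash join"
    print(f"assumed sel={sel}: nestloop={nl:,.0f} hash={hj:,.0f} -> {choice}")
```

Under these assumptions the 0.5% guess always picks the nested loop and
the 50% guess always picks the hash join; whichever default you choose
locks in the wrong plan whenever the true selectivity sits on the other
side of the crossover, which is the point above.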
...Robert