From: Nathan Boley <npboley(at)gmail(dot)com>
To: Florian Pflug <fgp(at)phlo(dot)org>
Cc: Tomas Vondra <tv(at)fuzzy(dot)cz>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: estimating # of distinct values
Date: 2011-01-20 01:41:51
Message-ID: AANLkTi=nNLUkMFQKitsSpLTWh+UoT742GcmKtAdsC97U@mail.gmail.com
Lists: pgsql-hackers
>> If you think about it, it's a bit ridiculous to look at the whole table
>> *just* to "estimate" ndistinct - if we go that far why don't we just
>> store the full distribution and be done with it?
>
> - the best you could do is to average the
> individual probabilities which gives ... well, 1/ndistinct.
>
That is certainly *not* the best you could do in every case. The mean
is only the best estimate under an L2 (squared-error) loss, which is
definitely not the loss that matters here.
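To make that concrete, here is a quick numerical sketch (mine, added
for illustration, not part of the original thread): for a skewed set
of per-value selectivities, the single estimate that minimizes average
squared error is the mean, i.e. 1/ndistinct, while under an
absolute-error loss the optimum is the median, which is nowhere near
1/ndistinct.

```python
import numpy as np

# Per-value selectivities for a skewed column: value 1 covers 9,990 of
# 9,999 rows, values 2..10 cover one row each (hypothetical numbers
# mirroring the example below).
counts = np.array([9990] + [1] * 9, dtype=float)
sel = counts / counts.sum()

# Candidate single-number selectivity estimates.
candidates = np.linspace(1e-4, 1.0, 10001)

def best_estimate(loss):
    """Candidate minimizing the average loss over the ten values."""
    avg_loss = [loss(sel, c).mean() for c in candidates]
    return candidates[int(np.argmin(avg_loss))]

l2_opt = best_estimate(lambda s, c: (s - c) ** 2)   # squared-error loss
l1_opt = best_estimate(lambda s, c: np.abs(s - c))  # absolute-error loss

print(l2_opt, sel.mean())      # ~0.1: the mean, i.e. 1/ndistinct
print(l1_opt, np.median(sel))  # ~0.0001: the median, nothing like 1/ndistinct
```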
Consider a table with 10K rows, 9,990 of which have the value 1 and
the rest of which have the values 2, 3, ..., 10, versus a table with
the same 10 distinct values evenly distributed. For a simple equality
query, a bitmap scan might be best on the first table (at least for
the rare values), while a sequential scan would always be best on the
second.
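The rough arithmetic behind that (again an illustrative sketch, not
from the thread, and it assumes the equality constant is drawn with
the same skew as the data): both tables have ndistinct = 10, yet a
typical lookup on the skewed table matches ~99.8% of the rows, while
every lookup on the uniform table matches exactly 10%.

```python
import numpy as np

# Row counts per distinct value for the two hypothetical tables above.
skewed = np.array([9990] + [1] * 9, dtype=float)  # 9,990 ones, one row each of 2..10
uniform = np.full(10, 1000.0)                      # ten values, 1,000 rows each

for name, counts in (("skewed", skewed), ("uniform", uniform)):
    sel = counts / counts.sum()   # per-value selectivities
    ndistinct = len(counts)
    # Selectivity a lookup sees when its constant is drawn with the same
    # skew as the data (an assumption, but plausible for ad-hoc queries).
    typical = float((sel * sel).sum())
    print(f"{name}: ndistinct={ndistinct}, 1/ndistinct={1/ndistinct:.3f}, "
          f"typical lookup selectivity={typical:.4f}")

# skewed:  1/ndistinct = 0.100, but a typical lookup matches ~99.8% of rows
# uniform: 1/ndistinct = 0.100, and every lookup matches exactly 10% of rows
```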
This is precisely the point I was trying to make in my earlier email:
the loss function is very complicated. Improving the ndistinct
estimator could significantly improve the ndistinct estimates (in the
quadratic-loss sense) while only marginally improving the plans.
-Nathan