From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: pgsql-committers(at)postgresql(dot)org
Subject: pgsql: Omit null rows when setting the threshold for what's a most-comm
Date: 2016-04-01 21:03:37
Message-ID: E1am6ED-0001Pv-0W@gemulon.postgresql.org
Lists: pgsql-committers
Omit null rows when setting the threshold for what's a most-common value.
As with the previous patch, large numbers of null rows could skew this
calculation unfavorably, causing us to discard values that have a
legitimate claim to be MCVs, since our definition of MCV is that it's
most common among the non-null population of the column. Hence, make
the numerator of avgcount be the number of non-null sample values, not
the number of sample rows; likewise for maxmincount in the
compute_scalar_stats variant.
Also, make the denominator be the number of distinct values actually
observed in the sample, rather than reversing it back out of the computed
stadistinct. This avoids depending on the accuracy of the Haas-Stokes
approximation, and really it's what we want anyway; the threshold should
depend only on what we see in the sample, not on what we extrapolate
about the contents of the whole column.
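For illustration, here is a minimal sketch in C of the threshold computation
as the message describes it: the average-frequency numerator is the count of
non-null sample values and the denominator is the number of distinct values
actually observed in the sample. This is not the actual analyze.c code; the
function and variable names (mcv_min_count, nonnull_rows, distinct_seen) are
hypothetical, and the 1.25 multiplier and the floor of 2 are assumed to match
the long-standing heuristic in analyze.c.

#include <stdio.h>

/*
 * Sketch: minimum number of occurrences a sampled value must have
 * to be kept as a most-common value (MCV).
 *
 *   nonnull_rows  - sample rows whose column value is not NULL
 *   distinct_seen - distinct non-null values actually observed in the sample
 */
static double
mcv_min_count(int nonnull_rows, int distinct_seen)
{
    double avgcount;
    double mincount;

    /* average occurrences of a value among the non-null sample rows */
    avgcount = (double) nonnull_rows / (double) distinct_seen;

    /* require 25% above average (assumed heuristic multiplier) */
    mincount = avgcount * 1.25;

    /* never accept a value seen fewer than twice */
    if (mincount < 2.0)
        mincount = 2.0;

    return mincount;
}

int
main(void)
{
    /* e.g. 30000 sample rows, 25000 of them NULL: 5000 non-null rows */
    printf("threshold = %.2f\n", mcv_min_count(5000, 200));
    return 0;
}

In that example, using all 30000 sample rows as the numerator would demand
about 30000/200 * 1.25 = 187.5 occurrences, whereas counting only the 5000
non-null rows asks for 5000/200 * 1.25 = 31.25, so values that are genuinely
common among the non-null population are no longer discarded merely because
the column is mostly NULL.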
Alex Shulgin, reviewed by Tomas Vondra and myself
Branch
------
master
Details
-------
http://git.postgresql.org/pg/commitdiff/3d3bf62f30200500637b24fdb7b992a99f9704c3
Modified Files
--------------
src/backend/commands/analyze.c | 20 +++++++++-----------
1 file changed, 9 insertions(+), 11 deletions(-)