From: Alex Shulgin <alex(dot)shulgin(at)gmail(dot)com>
To: "Shulgin, Oleksandr" <oleksandr(dot)shulgin(at)zalando(dot)de>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, David Steele <david(at)pgmasters(dot)net>
Subject: Re: More stable query plans via more predictable column statistics
Date: 2016-04-03 04:59:16
Message-ID: CAM-UEKR6EUKT7E3JXqC5Txu4w1qxRbw+mEFAS0Mx4R9vP8oejw@mail.gmail.com
Lists: pgsql-hackers
On Sun, Apr 3, 2016 at 3:43 AM, Alex Shulgin <alex(dot)shulgin(at)gmail(dot)com> wrote:
>
> I'm not sure yet about the 1% rule for the last value, but would also love
> to see if we can avoid the arbitrary limit here. What happens with a last
> value which is less than 1% popular in the current code anyway?
>
Tom,
Now that I think about it, I don't really believe this arbitrary heuristic
is any good either, sorry. What if you have a value that is just a bit
under 1% popular, but appears in 50% of your queries in the WHERE clause
with an equality comparison? Without this value in the MCV list, the
planner will likely choose a SeqScan over an IndexScan that might be more
appropriate here. I think we are much better off if we don't touch this
aspect of the current code.
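To make the concern concrete, here is a hedged sketch (not PostgreSQL's
actual analyze.c logic; all names are illustrative) of what a hard
frequency floor does to a borderline value:

```python
# Hypothetical simplification of an MCV cutoff rule: a value enters the
# MCV list only if its observed sample frequency is at least min_freq.
def mcv_candidates(value_counts, sample_size, min_freq=0.01):
    """Return values whose sampled frequency meets the cutoff."""
    return [v for v, c in value_counts.items()
            if c / sample_size >= min_freq]

# A value seen in 0.9% of sampled rows is dropped by a 1% floor, no
# matter how often the workload actually filters on it.
counts = {"common": 500, "borderline": 9, "rare": 1}
print(mcv_candidates(counts, 1000))  # only "common" survives the cutoff
```

With "borderline" missing from the MCV list, the planner has no per-value
frequency for it and falls back to a generic estimate, which is how the
SeqScan-vs-IndexScan misstep described above can happen.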
What was your motivation for introducing a lower limit in the first
place? If it was to prevent an accidental division by zero, then an
explicit check that the denominator is not 0 seems to me a better
safeguard than this.
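The safeguard I have in mind could look like this (an illustrative
sketch only; the function and parameter names are mine, not from
analyze.c):

```python
# Guard the division explicitly instead of imposing an arbitrary lower
# bound on the input frequency.
def selectivity(match_count, total_count):
    """Estimated fraction of rows matching; safe when no rows were sampled."""
    if total_count == 0:
        return 0.0  # nothing sampled, nothing to estimate
    return match_count / total_count
```

This keeps the estimate exact for every nonzero sample while still
avoiding the division-by-zero case the limit presumably guarded against.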
Regards.
--
Alex