From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Pavel Tavoda <pavel(dot)tavoda(at)gmail(dot)com>
Cc: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>, "pgsql-bugs(at)postgresql(dot)org" <pgsql-bugs(at)postgresql(dot)org>
Subject: Re: BUG #14729: Between operator is slow when same value used for low and high margin
Date: 2017-07-04 05:15:09
Message-ID: 15012.1499145309@sss.pgh.pa.us
Lists: pgsql-bugs
Pavel Tavoda <pavel(dot)tavoda(at)gmail(dot)com> writes:
> -> Seq Scan on contractportfolio cp (cost=0.00..109171.90 rows=1 width=8) (actual time=0.009..486.918 rows=20426 loops=1)
> Filter: ((validfor >= '2017-05-31'::date) AND (validfor <= '2017-05-31'::date))
> STUNNING!!!!
Yup, it's certainly that factor-of-20K rowcount misestimate that is
killing you here. Given that this estimate isn't too bad:
> -> Seq Scan on contractportfolio cp (cost=0.00..101496.91 rows=20197 width=8) (actual time=0.009..463.063 rows=20426 loops=1)
> Filter: (validfor = '2017-05-31'::date)
I do not think your problem is one of out-of-date statistics. Rather,
the issue is just that we're bad at narrow range estimates. I did
some work on that today[1] but it won't show up in a released PG version
before next year. In the meantime, I believe that the existing code would
arrive at a plausible answer if the value being checked were present in
the column's pg_stats.most_common_vals list. Maybe you could fix this
by increasing the statistics target for the column or the whole table
(and re-analyzing it, of course).
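
Something along these lines might do it (table and column names taken from your EXPLAIN output; the target value of 1000 is just an example, tune to taste):

```sql
-- Raise the per-column statistics target so ANALYZE collects a larger
-- most_common_vals list for this column:
ALTER TABLE contractportfolio ALTER COLUMN validfor SET STATISTICS 1000;
ANALYZE contractportfolio;

-- Check whether the problem value now shows up in the MCV list:
SELECT most_common_vals
FROM pg_stats
WHERE tablename = 'contractportfolio' AND attname = 'validfor';
```

And of course, until a planner fix ships, writing the degenerate range as a plain equality (validfor = '2017-05-31') gets you the good estimate shown in your second plan.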
regards, tom lane
[1] https://www.postgresql.org/message-id/12232.1499140410%40sss.pgh.pa.us