From: Merlin Moncure <mmoncure(at)gmail(dot)com>
To: pgsql-bugs <pgsql-bugs(at)postgresql(dot)org>
Subject: Introducing floating point cast into filter drastically changes row estimate
Date: 2012-10-24 15:06:51
Message-ID: CAHyXU0zLUJ1jPUW5rJ8c269m=sGFCQd1AuSGudDNFSJ5DFk0QA@mail.gmail.com
Lists: pgsql-bugs
I was chasing down a query that ran fine in 8.1 but had a near-infinite
runtime in 9.2. It turned out to be caused by a bad filter estimate
that is surprisingly simple to reproduce:
postgres=# create table foo(i int);
CREATE TABLE
postgres=# insert into foo select 1000 + (v/200) from generate_series(1,5000) v;
INSERT 0 5000
postgres=# ANALYZE foo;
ANALYZE
The following query runs fine; it estimates the returned rows pretty well:
postgres=# explain analyze select * from foo where i > 100 and i < 10000;
QUERY PLAN
----------------------------------------------------------------------------------------------------
Seq Scan on foo (cost=0.00..98.00 rows=4999 width=4) (actual
time=0.018..1.071 rows=5000 loops=1)
Filter: ((i > 100) AND (i < 10000))
Total runtime: 1.425 ms
...but if you introduce a floating point cast, the row estimate changes
drastically (why?):
postgres=# explain analyze select * from foo where i::float8 > 100
and i::float8 < 10000;
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------
Seq Scan on foo (cost=0.00..123.00 rows=25 width=4) (actual
time=0.022..1.566 rows=5000 loops=1)
Filter: (((i)::double precision > 100::double precision) AND
((i)::double precision < 10000::double precision))
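A possible workaround, not discussed above but standard practice: an
expression index makes ANALYZE gather statistics on the casted
expression itself, so the planner no longer has to guess. With no
statistics available for (i)::double precision, it appears to fall back
on a hard-coded default range selectivity (0.005, which would give
5000 × 0.005 = 25 rows, matching the estimate above). A sketch:

```sql
-- Expression index on the exact cast used in the filter; a following
-- ANALYZE collects a histogram for i::float8, which the planner can
-- then match against the (i)::double precision clauses.
create index foo_i_float8_idx on foo ((i::float8));
analyze foo;
explain analyze select * from foo
  where i::float8 > 100 and i::float8 < 10000;
```

The index name is arbitrary; the key point is that the indexed
expression must match the filter expression exactly for the statistics
to be used.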
merlin