From: Matthias <nitrogenycs(at)googlemail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Gist cost estimates
Date: 2012-01-31 17:04:56
Message-ID: CACECd8j_-RsT09__yYmmi39meseAh5UVFWr4LUUUYsLwRTU4dA@mail.gmail.com
Lists: pgsql-general
Hello,
I've noticed the new range data types in 9.2dev. I'm really looking
forward to using them, so I built PostgreSQL 9.2dev on Windows to try
them out.
While testing I noticed one thing. I have a simple test table with 1
million rows. There's a column called valid_range (of type int4range)
which is GiST indexed. Now when I do a query like
select * from mytable where valid_range && int4range(100,200)
it will use the created GiST index, but the row estimate is completely
wrong. For whatever reason the planner always assumes 5104 rows will be
returned, while in reality more than 300k rows are returned.
If I change the query to look like
select * from mytable where valid_range && int4range(null,null)
it will still estimate 5104 rows to be returned (in reality it's 1M
rows -- the whole table). This leads to grossly inefficient query
plans.
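For reference, a rough sketch of my test setup (names simplified; the
data-generation query below is only an illustration, my real data is
different):

  -- simplified reproduction of the test table
  CREATE TABLE mytable (
      id          serial PRIMARY KEY,
      valid_range int4range
  );

  -- roughly 1 million rows with arbitrary ranges (illustrative only)
  INSERT INTO mytable (valid_range)
  SELECT int4range(g, g + (random() * 1000)::int)
  FROM generate_series(1, 1000000) AS g;

  CREATE INDEX mytable_valid_range_idx ON mytable USING gist (valid_range);
  ANALYZE mytable;

  -- both of these show the same 5104-row estimate in my setup
  EXPLAIN ANALYZE SELECT * FROM mytable WHERE valid_range && int4range(100, 200);
  EXPLAIN ANALYZE SELECT * FROM mytable WHERE valid_range && int4range(NULL, NULL);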
Curiously, I have the same problem with Postgres' cube data type
(tested on 9.1, which also estimates exactly 5104 rows), and PostGIS
indexes have a similar (though maybe unrelated) problem.
Do you have any explanation for these grossly wrong cost estimates?
Is selectivity estimation for these operators simply not implemented
yet? What can I do to debug this further?
Thank you,
-Matthias
P.S.: I've already increased the statistics target for the column and
re-run VACUUM ANALYZE, to no avail.
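Roughly what I did there was something like the following (the target
value of 1000 is just an example):

  -- raise the per-column statistics target, then re-gather statistics
  ALTER TABLE mytable ALTER COLUMN valid_range SET STATISTICS 1000;
  VACUUM ANALYZE mytable;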