From: "Kristian Eide" <kreide(at)online(dot)no>
To: <pgsql-sql(at)postgresql(dot)org>
Subject: Why is my index not used
Date: 2002-02-18 18:54:42
Message-ID: 00bd01c1b8ad$b9d03e90$6b97f181@speed
Lists: pgsql-sql
I have a table, currently at about 150,000 rows, with a btree index on a
field named 'misscnt'. ~138k of the rows have this field set to null, ~12k
to 0, ~350 to 1, ~200 to 2 and ~150 to 3 (these are the only values used).
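For reference, the index is a plain single-column btree created with
something like the statement below (the index name here is only an
example), and the counts above come from a simple group-by:
# create index cam_misscnt_idx on cam (misscnt);
# select misscnt, count(*) from cam group by misscnt;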
Still, I get the following:
# explain select * from cam where misscnt>=1;
NOTICE: QUERY PLAN:
Seq Scan on cam (cost=0.00..3609.59 rows=46896 width=66)
Why does PostgreSQL think it will get over 46k rows from this query? If I
run a VACUUM ANALYZE on the table, I get:
# explain select * from cam where misscnt>=1;
NOTICE: QUERY PLAN:
Seq Scan on cam (cost=0.00..3874.01 rows=50347 width=66)
So now it actually thinks it will get even _more_ rows, even though the actual count is much smaller:
# select count(*) from cam where misscnt>=1;
count
-------
692
If I use "set enable_seqscan=false;" and try "select * from cam where
misscnt>=1;" the index is used, and the query executes quite a bit faster.
However, shouldn't Postgre be able to do this automatically?
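For completeness, this is the workaround session I use; I have left out the
plan output, but with seqscans disabled the plan switches to an index scan
on the misscnt index:
# set enable_seqscan=false;
# explain select * from cam where misscnt>=1;
# select * from cam where misscnt>=1;
# set enable_seqscan=true;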
This is using PostgreSQL 7.1.2.
Thanks.
---
Kristian Eide