From: Francisco Reyes <fran(at)reyes(dot)somos(dot)net>
To: rudy <rudy(at)heymax(dot)com>
Cc: pgsql-novice(at)postgresql(dot)org
Subject: Re: explain plan
Date: 2001-02-02 05:07:24
Message-ID: Pine.BSF.4.32.0102020002431.1877-100000@zoraida.reyes.somos.net
Lists: pgsql-novice
On Tue, 30 Jan 2001, rudy wrote:
> skyy=# vacuum analyze article;
> VACUUM
> skyy=# explain select id_article from article where id_article = 21;
> NOTICE: QUERY PLAN:
>
> Seq Scan on article (cost=0.00..1.61 rows=1 width=8)
>
> EXPLAIN
> skyy=#
>
> This table has 20,000 records. What am I doing wrong? Why doesn't it use
> the index I created? Is there something I need to enable? Why wouldn't
> it choose an index over a seq scan with more than 20,000 rows to scan?
I am new to PostgreSQL, but I have been working with databases for a
while, so I will offer feedback based on previous experience with other
optimizers.
Depending on how big each row is, the optimizer may decide that the
overhead of going through the index is not worth it compared to the
"cost" of simply reading the whole table.
You also need to take into account the cardinality of the field in
question (are you familiar with the term?).
For example, if VACUUM ANALYZE found that the field in question has few
distinct values, so the optimizer believes your query would return a
large fraction of the rows, then going through the index may indeed be
slower than a sequential scan.
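If you want to see roughly what the statistics are working with,
something like this (using the table and column from your example)
shows the cardinality directly:

skyy=# SELECT count(*) AS total_rows,
skyy-#        count(DISTINCT id_article) AS distinct_values
skyy-# FROM article;

If distinct_values is close to total_rows, the field is highly
selective, and an index lookup for a single value should normally be
cheap.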
How many rows does the query return?
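You can check with:

skyy=# SELECT count(*) FROM article WHERE id_article = 21;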