From: "scott(dot)marlowe" <scott(dot)marlowe(at)ihs(dot)com>
To: Adam Kavan <akavan(at)cox(dot)net>
Cc: Carlos Moreno <moreno(at)mochima(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: Odd behaviour -- Index scan vs. seq. scan
Date: 2003-09-16 12:57:07
Message-ID: Pine.LNX.4.33.0309160655350.4036-100000@css120.ihs.com
Lists: pgsql-general
On Mon, 15 Sep 2003, Adam Kavan wrote:
>
> >
> > explain delete from game where gameid = 1000;
> > Index Scan using game_pkey on game (cost=0.00..3.14 rows=1 width=6)
> >
> > explain delete from game where gameid < 1000;
> > Seq Scan on game (cost=0.00..4779.50 rows=200420 width=6)
> >
> > explain delete from game where gameid between 1000 and 2000;
> > Index Scan using game_pkey on game (cost=0.00..3.15 rows=1 width=6)
> >
> >
> >How's that possible? Is it purposely done like this, or
> >is it a bug? (BTW, Postgres version is 7.2.3)
>
>
> Postgres thinks that for the = line there will only be 1 row, so it uses an
> index scan. Same thing for the BETWEEN. However, it thinks that there are
> 200420 rows below 1000 and decides a seq scan would be faster. You can run
> EXPLAIN ANALYZE to see whether its guesses are correct. You can also try
> SET enable_seqscan = FALSE; to see if an index scan is faster. If it is,
> edit your postgresql.conf file and lower random_page_cost (a sketch of this
> test follows below).
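A minimal sketch of that test, reusing the table and column from the thread
(note that EXPLAIN ANALYZE actually executes the statement, so wrap the
DELETE in a transaction and roll it back; the random_page_cost value shown
is only illustrative):

BEGIN;
-- Compare the planner's estimated row count against the actual one:
EXPLAIN ANALYZE DELETE FROM game WHERE gameid < 1000;
ROLLBACK;

-- Steer the planner away from seq scans for this session and compare:
SET enable_seqscan = FALSE;
BEGIN;
EXPLAIN ANALYZE DELETE FROM game WHERE gameid < 1000;
ROLLBACK;
SET enable_seqscan = TRUE;

-- If the index scan is consistently faster, lower random_page_cost in
-- postgresql.conf (the default is 4), e.g.:
--   random_page_cost = 2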
Before you do that, you might wanna issue this command:
alter table game alter column gameid set statistics 100;
analyze game;
and see what you get.
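Raising the statistics target makes the next ANALYZE sample more values for
that column, which usually tightens the row estimates for range conditions
like gameid < 1000. To check what was collected, something along these lines
should work (pg_stats is the standard statistics view; the exact columns
available depend on your version):

SELECT attname, n_distinct, most_common_vals
FROM pg_stats
WHERE tablename = 'game' AND attname = 'gameid';

-- Then re-run the EXPLAIN to see whether the estimate changed:
EXPLAIN DELETE FROM game WHERE gameid < 1000;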