From: Stephan Szabo <sszabo(at)megazone23(dot)bigpanda(dot)com>
To: Kacper Nowicki <kacper(dot)nowicki(at)wp(dot)pl>
Cc: PostgreSQL <pgsql-general(at)postgresql(dot)org>
Subject: Re: index on large table
Date: 2002-03-12 22:33:21
Message-ID: 20020312143030.M70457-100000@megazone23.bigpanda.com
Lists: pgsql-general
On Tue, 12 Mar 2002, Kacper Nowicki wrote:
> So, let's look further down the table: the offset is shifted to 1M, and we
> still want to see just 10 entries.
>
> "explain select * from events order by oid limit 10 offset 1000000"
> NOTICE: QUERY PLAN:
> Limit (cost=424863.54..424863.54 rows=10 width=130)
> -> Sort (cost=424863.54..424863.54 rows=1025245 width=130)
> -> Seq Scan on events (cost=0.00..35645.45 rows=1025245 width=130)
>
> Bummer. This is very slow again, a sequential scan again. Why is the index
> not used for this query? Using the index would make it very fast!
What gets shown for explain with set enable_seqscan=off? If you're using
7.2, try explain analyze both ways as well.
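For example, something along these lines (just a sketch based on your query
above, not tested here):

  SET enable_seqscan = off;
  EXPLAIN ANALYZE SELECT * FROM events ORDER BY oid LIMIT 10 OFFSET 1000000;
  SET enable_seqscan = on;
  EXPLAIN ANALYZE SELECT * FROM events ORDER BY oid LIMIT 10 OFFSET 1000000;

That shows the actual runtimes for both the index plan and the seqscan plan,
not just the planner's estimates.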
Fetching rows via the index would be slower than the sequential scan, but
most of the cost seems to be going into the sort. Another thing to try
would be raising sort_mem, I guess.
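For instance (sort_mem is per-sort memory in kilobytes; 16384 is only an
example value, pick something that fits your machine):

  SET sort_mem = 16384;
  EXPLAIN ANALYZE SELECT * FROM events ORDER BY oid LIMIT 10 OFFSET 1000000;

If the sort fits in memory instead of spilling to disk, the sort step should
get noticeably cheaper.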