From: Claudio Freire <klaussfreire(at)gmail(dot)com>
To: Ivan Voras <ivoras(at)freebsd(dot)org>
Cc: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, postgres performance list <pgsql-performance(at)postgresql(dot)org>
Subject: Re: ORDER BY, LIMIT and indexes
Date: 2013-08-06 15:47:01
Message-ID: CAGTBQpbmSsPARXhhD07_n9Zjw4O=TROORVQ0GKje8AmFkP9Oaw@mail.gmail.com
Lists: pgsql-performance
On Tue, Aug 6, 2013 at 7:46 AM, Ivan Voras <ivoras(at)freebsd(dot)org> wrote:
> ivoras=# explain analyze select * from lt where id > 900000 limit 10;
> QUERY PLAN
> ----------------------------------------------------------------------------------------------------------------
> Limit (cost=0.00..1.71 rows=10 width=9) (actual
> time=142.669..142.680 rows=10 loops=1)
> -> Seq Scan on lt (cost=0.00..17402.00 rows=101630 width=9)
> (actual time=142.665..142.672 rows=10 loops=1)
> Filter: (id > 900000)
> Total runtime: 142.735 ms
> (4 rows)
I think the problem lies in the planner assuming the sequential scan has
zero startup cost, which isn't the case here: the scan has to read up to
the first page containing an id > 900000 before it can emit any rows.
If that were fixed, the index scan would be chosen instead.
I don't see a way to fix it without querying the index, which could be
a costly operation... except with the newly proposed minmax indexes. I
guess here's another case where those indexes would be useful.
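For anyone wanting to reproduce the mis-plan, here's a minimal sketch. The
table name lt, the ~1M row count, and the predicate are taken from the thread;
the md5 payload column is my own assumption. Disabling the seq scan shows what
the index-scan plan actually costs, so you can compare the two estimates:

```sql
-- Hypothetical setup approximating the table from the thread
CREATE TABLE lt (id serial PRIMARY KEY, v text);
INSERT INTO lt (v)
SELECT md5(i::text) FROM generate_series(1, 1000000) i;
ANALYZE lt;

-- Default plan: the planner picks the seq scan (cheap startup on paper)
EXPLAIN ANALYZE SELECT * FROM lt WHERE id > 900000 LIMIT 10;

-- Force the alternative to see the index scan's real cost and runtime
SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT * FROM lt WHERE id > 900000 LIMIT 10;
RESET enable_seqscan;
```

If the index-scan runtime is far lower than the seq-scan runtime (as in the
quoted 142 ms plan), that supports the point that the zero-startup-cost
assumption is what's steering the planner wrong here.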