| From: | Glyn Astill <glynastill(at)yahoo(dot)co(dot)uk> |
|---|---|
| To: | Adam Rich <adam(dot)r(at)sbcglobal(dot)net>, rihad <rihad(at)mail(dot)ru> |
| Cc: | pgsql-general(at)postgresql(dot)org |
| Subject: | Re: index speed and failed expectations? |
| Date: | 2008-08-04 14:33:40 |
| Message-ID: | 781023.57753.qm@web25808.mail.ukl.yahoo.com |
| Lists: | pgsql-general |
> > However, if you limit the number of rows enough, you might force it
> > to use an index:
> >
> > select * from stats order by start_time limit 1000;
> >
>
> Thanks! Since LIMIT/OFFSET is the typical usage pattern for a paginated
> data set accessed from the Web (which is my case), it immediately
> becomes a non-issue.
>
We run a lot of queries with ORDER BY ... LIMIT n, and in my experience setting enable_sort to off also makes a massive difference, since it pushes the planner towards index scans that return rows already in the requested order:
http://www.postgresql.org/docs/8.3/static/indexes-ordering.html
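A minimal sketch of how you might compare the plans (the table and column come from the thread; the index name and its existence are assumptions):

```sql
-- Assumed index; without it the planner has no ordered path to use:
-- CREATE INDEX stats_start_time_idx ON stats (start_time);

SET enable_sort = off;  -- session-level planner setting, discourages explicit Sort steps

EXPLAIN SELECT * FROM stats ORDER BY start_time LIMIT 1000;
-- If the index is usable, the plan should show an Index Scan on
-- stats_start_time_idx feeding the Limit, rather than a Sort node.

RESET enable_sort;      -- restore the default for the rest of the session
```

Note that enable_sort is a blunt instrument; it is mainly useful for testing whether an index-ordered plan exists at all, rather than as a permanent setting.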
| From | Date | Subject | |
|---|---|---|---|
| Next Message | Kedar | 2008-08-04 15:10:50 | Re: How to remove duplicate lines but save one of the lines? |
| Previous Message | Tom Lane | 2008-08-04 14:20:13 | Re: index speed and failed expectations? |