From: Martijn van Oosterhout <kleptog(at)svana(dot)org>
To: Felipe Schnack <felipes(at)ritterdosreis(dot)br>
Cc: terry(at)ashtonwoodshomes(dot)com, pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: LIMIT clause optimization
Date: 2003-01-07 13:22:47
Message-ID: 20030107132247.GC14655@svana.org
Lists: pgsql-general
On Tue, Jan 07, 2003 at 11:14:46AM -0200, Felipe Schnack wrote:
> So, generally selecting all rows from a table and fetching only the
> first one is probably faster than limiting the query to its first row?
>
> On Tue, 2003-01-07 at 11:14, terry(at)ashtonwoodshomes(dot)com wrote:
> > Further, I think if your query has an order by clause then the whole query
> > is executed, sorted, then all but the limit'd rows are truncated. Hence
> > there is no performance improvement.
> >
> > A very VERY smart database engine could perhaps in some cases use an index
> > to determine in advance the sort and get the rows in the correct order, and
> > hence stop when the limit was reached. But that would be a rare case at
> > best, and I doubt anyone has gone to the brain damage of implementing such
> > complexity considering the very limited payback.
Well, I guess that makes postgresql a very VERY smart database engine. If
you give no limit, postgresql will base its planning on retrieving all
rows. If you specify a LIMIT, it will plan on only computing those rows.
Play with EXPLAIN, LIMIT, and very large tables with indexes. It's fairly
easy to demonstrate.
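
For example (a rough sketch; the table, index, and plan shapes below are
illustrative assumptions, not output from an actual run):

    CREATE TABLE big (id integer, payload text);
    CREATE INDEX big_id_idx ON big (id);
    -- after loading plenty of rows and running ANALYZE:

    EXPLAIN SELECT * FROM big ORDER BY id;
    -- typically plans a full scan (or a Seq Scan plus Sort), since every
    -- row has to be produced before the result is complete.

    EXPLAIN SELECT * FROM big ORDER BY id LIMIT 1;
    -- typically plans a Limit node on top of an Index Scan using
    -- big_id_idx, so execution stops as soon as the first row comes back.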
--
Martijn van Oosterhout <kleptog(at)svana(dot)org> http://svana.org/kleptog/
> Support bacteria! They're the only culture some people have.