From: Stephan Szabo <sszabo(at)megazone(dot)bigpanda(dot)com>
To: Michael McFarland <sidlonDoesntLikeSpam(at)yahoo(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: adding 'limit' leads to very slow query
Date: 2005-03-08 07:03:43
Message-ID: 20050307230123.L97147@megazone.bigpanda.com
Lists: pgsql-performance
On Mon, 7 Mar 2005, Michael McFarland wrote:
> I'm trying to understand why a particular query is slow, and it seems
> like the optimizer is choosing a strange plan. See this summary:
>
>
> * I have a large table, with an index on the primary key 'id' and on a
> field 'barId'.
> > select count(*) from foo;
> 1,000,000
> > select count(*) from foo where barId = 412;
> 7
>
> * This query happens very quickly.
> > explain select * from foo where barId = 412 order by id desc;
> Sort ()
> Sort key= id
> -> Index scan using bar_index on foo ()
> Index cond: barId = 412
>
> But this query takes forever
>
> > explain select * from foo where barId = 412 order by id desc limit 25;
> Limit ()
> -> Index scan backward using primarykey_index
> Filter: barId = 412
You didn't show the row estimates, but I'd guess one of two things. Either
it's expecting more rows matching barId=412 than there actually are (which
may be solvable by raising the statistics target on the column and
re-analyzing), so that walking backwards on id until it collects 25
matching rows doesn't look like a bad plan; or barId and id are correlated,
which unfortunately is not going to be recognized right now.
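
For reference, a rough sketch of both suggestions, reusing the table and
column names from the example above (the statistics target of 200 is just
an illustrative value, not a recommendation for this particular table):

  -- compare the planner's row estimate with the actual number of matching rows
  EXPLAIN ANALYZE SELECT * FROM foo WHERE barId = 412 ORDER BY id DESC LIMIT 25;

  -- raise the per-column statistics target, then gather fresh statistics
  ALTER TABLE foo ALTER COLUMN barId SET STATISTICS 200;
  ANALYZE foo;

If the estimated row count for barId = 412 in the EXPLAIN ANALYZE output is
far above the actual 7 rows, the higher statistics target may be enough to
steer the planner back to the bar_index plan.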