From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: Tyler Hains <thains(at)profitpointinc(dot)com>
Cc: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: Query Optimizer makes a poor choice
Date: 2011-11-29 20:34:14
Message-ID: CAOR=d=1_=UJ2QV7Q7LDjtsS1rT-OgNL_Jj43ynk10DNzJ1sfnA@mail.gmail.com
Lists: pgsql-general
On Tue, Nov 29, 2011 at 11:21 AM, Tyler Hains <thains(at)profitpointinc(dot)com> wrote:
> # explain analyze select * from cards where card_set_id=2850 order by card_id limit 1;
>                                                                        QUERY PLAN
> -----------------------------------------------------------------------------------------------------------------------------------------
>  Limit  (cost=0.00..105.19 rows=1 width=40) (actual time=6026.947..6026.948 rows=1 loops=1)
>    ->  Index Scan using cards_pkey on cards  (cost=0.00..2904875.38 rows=27616 width=40) (actual time=6026.945..6026.945 rows=1 loops=1)
There's a huge disconnect here between what the query planner expects
(27k rows) and how many there are (1). Also, getting a single row
from an index should be faster than this, even if the table and index
are quite large. Have you checked for bloat on this index?
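As a rough sketch of how you might check (using the `cards` table and `cards_pkey` index names from your plan; exact thresholds are a judgment call): compare the index's on-disk size to the table's, refresh the statistics in case the estimate is simply stale, and rebuild the index if it looks inflated.

    -- Compare physical sizes; a primary-key index that is several times
    -- the size of its table usually indicates bloat.
    SELECT pg_size_pretty(pg_relation_size('cards_pkey')) AS index_size,
           pg_size_pretty(pg_relation_size('cards'))      AS table_size;

    -- Refresh planner statistics in case the row estimate is stale.
    ANALYZE cards;

    -- If the index really is bloated, rebuild it. Note that REINDEX
    -- takes an exclusive lock on the table while it runs.
    REINDEX INDEX cards_pkey;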