From: Marcin Mańk <marcin(dot)mank(at)gmail(dot)com>
To: Tyler Hains <thains(at)profitpointinc(dot)com>
Cc: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: Query Optimizer makes a poor choice
Date: 2011-12-02 00:50:01
Message-ID: CAK61fk6a6NF9pW4tO0_UJORNGfLf13hHPaFgD71S5XsA5Pa4gw@mail.gmail.com
Lists: pgsql-general
On Tue, Nov 29, 2011 at 7:21 PM, Tyler Hains <thains(at)profitpointinc(dot)com> wrote:
> # explain analyze select * from cards where card_set_id=2850 order by
> card_id limit 1;
> QUERY PLAN
> -----------------------------------------------------------------------------------------------------------------------------------------
> Limit (cost=0.00..105.19 rows=1 width=40) (actual time=6026.947..6026.948
> rows=1 loops=1)
> -> Index Scan using cards_pkey on cards (cost=0.00..2904875.38
> rows=27616 width=40) (actual time=6026.945..6026.945 rows=1 loops=1)
> Filter: (card_set_id = 2850)
> Total runtime: 6026.985 ms
> (4 rows)
>
I believe this is the old problem of the planner expecting the
card_set_id's to be randomly distributed over the card_ids. That is not
the case here, I guess?
The planner expects to quickly hit a matching record while scanning
the primary key, and instead there is a nasty surprise.
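One quick way to check that (just a sketch, using the table and columns from your example) is to look at where the matching rows sit within the overall card_id range:

-- range of card_ids for this set vs. the whole table
select min(card_id), max(card_id), count(*) from cards where card_set_id = 2850;
select min(card_id), max(card_id) from cards;

If the first matching card_id is far into the full range, the scan on cards_pkey has to walk and throw away a lot of rows before the limit is satisfied, which would match the ~6 seconds you are seeing.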
It seems there is no perfect solution. Things you might want to try (sketch below):
- tweaking random_page_cost/seq_page_cost/work_mem
- "order by card_id-1"
- an index on (card_set_id, card_id)
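For the last two, something along these lines (untested; the index name is just an example):

-- composite index that matches both the filter and the order by,
-- so the planner can read the first row for the set directly
create index cards_set_card_id_idx on cards (card_set_id, card_id);

-- or rewrite the order by as an expression, which keeps the planner
-- from choosing cards_pkey to provide the ordering
select * from cards where card_set_id=2850 order by card_id-1 limit 1;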
Greetings
Marcin