Re: Query planner issue with preferring primary key over a better index when using ORDER BY and LIMIT

From: Michael Lewis <mlewis(at)entrata(dot)com>
To: Alan Hodgson <ahodgson(at)lists(dot)simkin(dot)ca>
Cc: Postgres General <pgsql-general(at)postgresql(dot)org>
Subject: Re: Query planner issue with preferring primary key over a better index when using ORDER BY and LIMIT
Date: 2021-12-06 17:18:11
Message-ID: CAHOFxGrGWyDQQUSWif_P9XVUKWUng4c7u7CbWGsbXrcXrurxEw@mail.gmail.com
Lists: pgsql-general

What is your default_statistics_target, and how accurate is that estimate
of 5668 rows? What is random_page_cost set to, by the way?
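
In case it's useful, all of that is quick to check from psql. A minimal
sketch; I'm guessing at the table name ("shipments") here, so substitute
your own names:

    -- planner settings currently in effect
    SHOW default_statistics_target;
    SHOW random_page_cost;

    -- any per-column statistics target override, and when the table
    -- was last analyzed (table/column names are placeholders)
    SELECT attname, attstattarget
    FROM pg_attribute
    WHERE attrelid = 'shipments'::regclass
      AND attname = 'shipment_import_id';

    SELECT relname, last_analyze, last_autoanalyze
    FROM pg_stat_user_tables
    WHERE relname = 'shipments';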

More importantly, what better plan would you like the planner to use with
your existing indexes? It seems logical to me to scan for the matching
shipment_import_id rows if the estimate says 5868 out of 29 million should
match, then sort them and take only the smallest ID. Doing an index scan
on ID and looking up each row in the table to see whether
shipment_import_id matches, when the planner expects that to be about a
0.0001 chance per row... I can't imagine that plan performing well at all.
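
For reference, I'm picturing the query shaped roughly like this (the table
name and literal value are just guesses, since the query itself wasn't
quoted here):

    SELECT *
    FROM shipments
    WHERE shipment_import_id = 12345
    ORDER BY id
    LIMIT 1;

With that shape, the planner is choosing between using the index on
shipment_import_id and sorting the matching rows to take the lowest id,
or walking the primary key in id order and filtering each row on
shipment_import_id until it finds the first match.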

Certainly a composite index would be very helpful here. Running EXPLAIN
ANALYZE and sharing the output would give more information to go on.
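
Something along these lines is what I have in mind (again, the table name
and the literal value are placeholders for whatever your real query uses):

    -- lets the filter and the ORDER BY ... LIMIT be satisfied from one index
    CREATE INDEX CONCURRENTLY ON shipments (shipment_import_id, id);

    -- then the full output of this would be helpful to see
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT *
    FROM shipments
    WHERE shipment_import_id = 12345
    ORDER BY id
    LIMIT 1;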
