Re: Unexpected expensive index scan

From: Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com>
To: Jake Nielsen <jake(dot)k(dot)nielsen(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Unexpected expensive index scan
Date: 2016-09-30 22:19:59
Message-ID: 7abde325-3452-a1e1-e288-ee33408d7708@BlueTreble.com
Lists: pgsql-performance

On 9/28/16 1:11 PM, Jake Nielsen wrote:
> Beautiful! After changing the random_page_cost to 1.0 the original query
> went from ~3.5s to ~35ms. This is exactly the kind of insight I was
> fishing for in the original post. I'll keep in mind that the query
> planner is very tunable and has these sorts of hardware-related
> trade-offs in the future. I can't thank you enough!

Be careful with setting random_page_cost to exactly 1... that tells the
planner that an index scan costs the same as a sequential scan, which
is essentially never true, even with the database entirely in memory.
1.1, or maybe even 1.01, is probably a safer bet.
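
If you want to bake that more conservative value in as the default
rather than overriding it per query, one option (just a sketch,
assuming superuser access and a version with ALTER SYSTEM, i.e. 9.4 or
later) is:

ALTER SYSTEM SET random_page_cost = 1.1;  -- written to postgresql.auto.conf
SELECT pg_reload_conf();                  -- reload so running sessions pick it up

You can also scope it to a single database or role with
ALTER DATABASE ... SET / ALTER ROLE ... SET instead of changing it
cluster-wide.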

Also note that you can set those parameters within a single session, as
well as within a single transaction. So if you need to force a different
setting for a single query, you could always do

BEGIN;
SET LOCAL random_page_cost = 1;
SELECT ...
COMMIT;  -- or ROLLBACK; either way the SET LOCAL value reverts at transaction end
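
If you'd rather keep the override for the rest of the session instead
of a single transaction (again, just a sketch):

SET random_page_cost = 1.1;   -- stays in effect until the session ends
SELECT ...
RESET random_page_cost;       -- back to the configured default
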
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532) mobile: 512-569-9461
