From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Bryan Murphy <bmurphy1976(at)gmail(dot)com>
Cc: Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Help Me Understand Why I'm Getting a Bad Query Plan
Date: 2009-03-26 02:15:14
Message-ID: 11650.1238033714@sss.pgh.pa.us
Lists: pgsql-performance
Bryan Murphy <bmurphy1976(at)gmail(dot)com> writes:
> I tried that already, but I decided to try again in case I messed up
> something last time. Here's what I ran. As you can see, it still
> chooses to do a sequential scan. Am I changing the stats for those
> columns correctly?
I think what you should be doing is messing with the cost parameters
... and not in the direction you tried before. I gather from
effective_cache_size = 12GB
that you have plenty of RAM on this machine. If the tables involved
are less than 1GB then it's likely that you are operating in a fully
cached condition, and the default cost parameters are not set up for
that. You want to be charging a lot less for page accesses relative to
CPU effort. Try reducing both seq_page_cost and random_page_cost to 0.5
or even 0.1. You'll need to watch your other queries to make sure
nothing gets radically worse though ...
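For illustration (this sketch is not from the original message), the suggestion above can be tried per-session before touching postgresql.conf; the SELECT is a placeholder for the actual problem query:

```sql
-- Lower the page-cost parameters for this session only,
-- then recheck the plan with EXPLAIN ANALYZE.
SET seq_page_cost = 0.5;
SET random_page_cost = 0.5;

EXPLAIN ANALYZE
SELECT ...;   -- re-run the problem query; look for an index scan

-- Undo the session-local settings when done experimenting.
RESET seq_page_cost;
RESET random_page_cost;
```

If the plan improves and other workloads don't regress, the same values can be made permanent by setting seq_page_cost and random_page_cost in postgresql.conf.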
regards, tom lane