From: Julien Cigar <jcigar(at)ulb(dot)ac(dot)be>
To: Kevin Grittner <kgrittn(at)ymail(dot)com>
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: slow joins?
Date: 2013-04-06 14:43:50
Message-ID: 516034A6.3030703@ulb.ac.be
Lists: pgsql-performance
On 04/06/2013 16:22, Kevin Grittner wrote:
> Julien Cigar <jcigar(at)ulb(dot)ac(dot)be> wrote:
>
>> try to increase cpu_tuple_cost to 0.1
> I agree that's on the right track, but possibly an overly blunt
> tool for the job. The following settings are likely to need
> adjustment, IMO:
>
> effective_cache_size: People often set this to somewhere in the
> range of 50% to 75% of the RAM on the machine. This setting does
> not allocate RAM, but tells the planner how likely it is to find
> things in cache for, say, repeated index access. A higher setting
> makes the random access involved in index scans seem like less of a
> problem.
I agree that the very first thing to check is effective_cache_size.
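As a concrete sketch of the advice above (the 10GB figure is purely illustrative, for a hypothetical machine with ~16GB of RAM; pick 50-75% of your own RAM):

```sql
-- effective_cache_size does not allocate memory; it only tells the planner
-- how much of the database it can expect to find cached (shared_buffers + OS cache).
-- Try it per-session first to see how plans change:
SET effective_cache_size = '10GB';

-- To make it permanent, set it in postgresql.conf and reload:
--   effective_cache_size = 10GB
```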
> random_page_cost: You seem to have a very high cache hit ratio,
> between shared_buffers and the OS cache. To model this you should
> decrease random_page_cost to something just above seq_page_cost or
> equal to it. To reflect the relatively low cost of reading a page
> from the OS cache (compared to actually reading from disk) you
> might want to reduce both of these below 1. 0.1 is a not-uncommon
> setting for instances with the active portion of the database
> well-cached.
I would first raise cpu_tuple_cost rather than touch random_page_cost.
Raising cpu_tuple_cost is a more fine-grained way of discouraging
seqscans than lowering random_page_cost is.
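For reference, the page-cost adjustment Kevin describes would look something like this (the exact values are illustrative and depend on how well-cached the active data set is):

```sql
-- Mostly-cached workload: bring random access close to sequential access.
SET seq_page_cost = 1.0;
SET random_page_cost = 1.1;

-- Database fully cached in RAM: both can drop well below 1,
-- e.g. the 0.1 Kevin mentions for well-cached instances:
-- SET seq_page_cost = 0.1;
-- SET random_page_cost = 0.1;
```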
> cpu_tuple_cost: I always raise this; I think our default is just
> too low to accurately model the cost of reading a row, compared to
> the cost factors used for other things. In combination with the
> above changes I've never had to go beyond 0.03 to get a good plan.
> I've pushed it to 0.05 to see if that put me near a tipping point
> for a bad plan, and saw no ill effects. I've never tried higher
> than 0.05, so I can't speak to that.
Yep, the default cpu_tuple_cost is just too low ..
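A minimal way to test this, per-session, before committing it to postgresql.conf (the query is a placeholder; substitute the slow join from this thread):

```sql
-- Default is 0.01; Kevin's range is 0.02-0.05.
SET cpu_tuple_cost = 0.03;

-- Re-check the plan for the problem query with the new cost model:
EXPLAIN (ANALYZE, BUFFERS)
SELECT ...;  -- your slow join here

-- Revert for this session if the plan got worse:
RESET cpu_tuple_cost;
```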
> In any event, your current cost settings aren't accurately modeling
> actual costs in your environment for your workload. You need to
> adjust them.
>
> One of the estimates was off, so increasing the statistics sample
> size might help, but I suspect that you need to make adjustments
> like the above in any event.
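For the statistics-sample-size part, a hedged sketch (table and column names are placeholders; raise the target on the column whose estimate was off):

```sql
-- Per-column override: sample more rows for this column's statistics.
-- The default target is 100; 500 is an illustrative bump.
ALTER TABLE some_table ALTER COLUMN some_column SET STATISTICS 500;

-- New statistics take effect only after re-analyzing:
ANALYZE some_table;

-- Alternatively, raise the cluster-wide default in postgresql.conf:
--   default_statistics_target = 500
```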
>