From: Vincent van Leeuwen <pgsql(dot)spam(at)vinz(dot)nl>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: tweaking costs to favor nestloop
Date: 2003-06-13 12:39:22
Message-ID: 20030613123922.GX1535@md2.mediadesign.nl
Lists: pgsql-performance
On 2003-06-11 16:17:53 -0400, Tom Lane wrote:
> Vincent van Leeuwen <pgsql(dot)spam(at)vinz(dot)nl> writes:
> > I'm unable to tweak the various _cost settings in such a way that attached
> > query will use the right plan.
>
> You aren't going to be able to. You've already overshot a reasonable
> random_page_cost setting --- to judge by the relative actual costs of
> the merge and hash join, a value somewhere around 3 is appropriate for
> your setup. (Assuming I did the math right --- if you set it to 3,
> do you get a ratio of merge and hash estimated costs that agrees with
> the ratio of actual runtimes?)
>
Well, random_page_cost is where it is right now because for a number of other
queries it seems to give the best result. Specifically, 1.25 seems to be the
sweet spot where a number of queries that were using seqscans but should use
indexscans started to use indexscans. Tweaking the cpu_index_tuple_cost by
rather large margins didn't seem to have any effect on the calculated costs.
Going back to a setting of 3 will hurt overall performance, unless we can
still get those other queries to use the right plan by tweaking other config
parameters.
How did you calculate the value of 3?
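Tom's back-of-the-envelope check can be sketched in a few lines. All numbers below are hypothetical, purely to illustrate the arithmetic; they are not taken from this thread:

```python
# Sketch of the sanity check Tom suggests: with random_page_cost set to a
# candidate value, the ratio of the *estimated* costs of the merge and hash
# plans should roughly match the ratio of their *actual* runtimes.
# All numbers here are hypothetical.

merge_est, hash_est = 29000.0, 24000.0   # estimated total costs (from EXPLAIN)
merge_act, hash_act = 5200.0, 4300.0     # actual runtimes in ms (EXPLAIN ANALYZE)

est_ratio = merge_est / hash_est
act_ratio = merge_act / hash_act

# If the candidate random_page_cost is about right, the two ratios agree.
print(f"estimated ratio {est_ratio:.3f} vs actual ratio {act_ratio:.3f}")
```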
Another problem we've noticed is that on an idle database certain queries are
better off using an indexscan than a seqscan, something which the planner
already wanted to do. But when the load on the database gets a lot higher,
indexscans are consistently slower than seqscans (same query, same
parameters). So we had to dick around a bit to favor seqscans more for those
queries (we set cpu_operator_cost a lot lower to favor a seqscan+sort over a
(reverse? dunno anymore) indexscan).
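A per-session experiment along those lines can be sketched roughly like this (the lowered value and the query are illustrative, not our production numbers; 0.0025 is the stock default):

```sql
-- Illustrative only: lower cpu_operator_cost for this session and compare
-- the plans under load with EXPLAIN ANALYZE.
SHOW cpu_operator_cost;                  -- default is 0.0025
SET cpu_operator_cost = 0.0005;          -- hypothetical lowered value
EXPLAIN ANALYZE SELECT * FROM reportuser ORDER BY idreporter DESC;
RESET cpu_operator_cost;                 -- back to the configured default
```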
> The problem here is that the costing of the repeated inner index scans
> isn't realistic: 35417 probes into "auth" are clearly taking much less
> than 35417 times what a single probe could be expected to take. We
> talked about how repeated scans would win from caching of the upper
> btree levels, but I think there's more to it than that. It occurs to me
> that the probes you are making are probably not random and uncorrelated.
> They are driven by the values of reportuser.idreporter ... is it fair
> to guess that most of the reportuser rows link to just a small fraction
> of the total auth population? If so, the caching could be eliminating
> most of the reads, not just the upper btree levels, because we're
> mostly hitting only small parts of the index and auth tables.
>
Exactly. I think the 'auth' table is already completely in the kernel
filesystem cache to begin with, and probably largely in shared_buffers too,
since it's a small table that gets hit a lot. Especially on its primary key,
which we use here.
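If the table really is resident in cache, one knob that tells the planner so is effective_cache_size. A minimal sketch, assuming a machine with roughly 512 MB available for caching (the value is illustrative; in 7.3 this setting is expressed in 8 kB disk pages):

```sql
-- Hedged sketch: raising effective_cache_size tells the planner that index
-- probes are likely to be cache hits rather than random disk reads.
-- 65536 pages * 8 kB = ~512 MB of assumed kernel + shared buffer cache.
SET effective_cache_size = 65536;
```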
> I'm beginning to think that the only reasonable way to model this is to
> cost the entire nestloop join as a unit, so that we have access to
> statistics about the outer table as well as the indexed table. That
> would give us a shot at estimating how much of the index is likely to
> get touched.
>
> As of 7.3 I think all you can do is force nestloop by disabling the
> other two join types.
>
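The 7.3 workaround Tom describes can be sketched per session like this (the query itself is elided; substitute the problem query):

```sql
-- Force a nestloop by disabling the other two join methods for this session;
-- the planner then has to cost the nested-loop plan as cheapest.
SET enable_mergejoin = off;
SET enable_hashjoin = off;
EXPLAIN ANALYZE
  SELECT ...;            -- the problem query goes here
RESET enable_mergejoin;
RESET enable_hashjoin;
```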
Does 7.4 already have changes in this area that will affect this query?
Vincent van Leeuwen
Media Design - http://www.mediadesign.nl/