From: | Josh Berkus <josh(at)agliodbs(dot)com> |
---|---|
To: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
Cc: | pgsql-performance(at)postgresql(dot)org |
Subject: | Re: Odd problem with performance in duplicate database |
Date: | 2003-08-11 23:51:01 |
Message-ID: | 200308111651.01597.josh@agliodbs.com |
Lists: | pgsql-performance |
Tom,
> Let's see the pg_stats rows for case_clients in both databases. The
> entries for trial_groups might be relevant too.
My reading is that the case is "borderline"; that is, because the correlation
is about 10-20% higher on the test database (since it was restored "clean"
from backup), the planner is resorting to a seq scan.
At which point the spectre of random_page_cost less than 1.0 rears its ugly
head again. The planner seems to regard this as a borderline case, but it's
far from borderline ... the index scan takes 260 ms, the seq scan 244,000 ms.
Yet my random_page_cost is already set pretty low, at 1.5.
It seems like I'd have to set random_page_cost to less than 1.0 to make sure
the planner never picks the seq scan, which kinda defies the meaning of the
setting.
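For what it's worth, this is roughly how I got the two timings above; the
WHERE clause is a made-up stand-in, since the real query is earlier in the
thread:

```sql
-- Force the index scan first, then let the planner pick the seq scan.
-- trial_group_id = 42 is a hypothetical filter standing in for the real query.
SET enable_seqscan = off;
EXPLAIN ANALYZE
SELECT * FROM case_clients WHERE trial_group_id = 42;  -- index scan: ~260 ms

RESET enable_seqscan;
EXPLAIN ANALYZE
SELECT * FROM case_clients WHERE trial_group_id = 42;  -- seq scan: ~244,000 ms
```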
*sigh* wish the client would pay for an upgrade ....
--
-Josh Berkus
Aglio Database Solutions
San Francisco