From: "scott(dot)marlowe" <scott(dot)marlowe(at)ihs(dot)com>
To: Manfred Koizar <mkoi-pg(at)aon(dot)at>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Correlation in cost_index()
Date: 2002-10-03 16:45:08
Message-ID: Pine.LNX.4.33.0210031040510.5705-100000@css120.ihs.com
Lists: pgsql-hackers
On Thu, 3 Oct 2002, Manfred Koizar wrote:
> On Wed, 2 Oct 2002 14:07:19 -0600 (MDT), "scott.marlowe"
> <scott(dot)marlowe(at)ihs(dot)com> wrote:
> >I'd certainly be willing to do some testing on my own data with them.
>
> Great!
>
> >Gotta patch?
>
> Not yet.
>
> > I've found that when the planner misses, sometimes it misses
> >by HUGE amounts on large tables, and I have been running random page cost
> >at 1 lately, as well as running cpu_index_cost at 1/10th the default
> >setting to get good results.
>
> May I ask for more information? What are your settings for
> effective_cache_size and shared_buffers? And which version are you
> running?
I'm running 7.2.2 in production and 7.3b2 in testing.
effective_cache_size is at the default (i.e. commented out);
shared_buffers is at 4000.
I've found that increasing shared buffers past 4000 (32 megs) to 16384
(128 Megs) has no great effect on my machine's performance, but I've never
really played with effective cache size.
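For anyone who wants to experiment with the same knobs, the planner cost parameters mentioned in this thread can be changed per session before running EXPLAIN; a sketch (the specific values below are just the ones from this message, not recommendations, and effective_cache_size at 10000 pages is a made-up illustration — note shared_buffers is not settable this way, it requires a postmaster restart):

```sql
-- Per-session planner tuning; units for *_cost are relative to
-- sequential page fetch cost (1.0), cache sizes are in 8 kB pages.
SET random_page_cost = 1;          -- default is 4
SET cpu_index_tuple_cost = 0.0001; -- 1/10th of the 0.001 default
SET effective_cache_size = 10000;  -- hypothetical: 10000 pages = ~80 MB
EXPLAIN SELECT ...;                -- compare plans before/after
```

For the record, 4000 shared buffers at 8 kB each is the 32 MB figure above, and 16384 buffers is the 128 MB one.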
I've got a couple of queries that join a 1M+ row table to itself and to a
50k-row table; the result sets are usually <100 rows at a time. Plus some
queries on other, smaller datasets that generally return larger amounts of
data (sometimes all rows).