From: "scott(dot)marlowe" <scott(dot)marlowe(at)ihs(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Dennis Björklund <db(at)zigo(dot)dhs(dot)org>, Fabian Kreitner <fabian(dot)kreitner(at)ainea-ag(dot)de>, <pgsql-performance(at)postgresql(dot)org>
Subject: Re: index / sequential scan problem
Date: 2003-07-18 14:09:22
Message-ID: Pine.LNX.4.33.0307180808530.1889-100000@css120.ihs.com
Lists: pgsql-performance
On Fri, 18 Jul 2003, Tom Lane wrote:
> Dennis Björklund <db(at)zigo(dot)dhs(dot)org> writes:
> > On Fri, 18 Jul 2003, Fabian Kreitner wrote:
> >> Adjusting the cpu_tuple_cost to 0.042 got the planner to choose the index.
>
> > Doesn't sound very good and it will most likely make other queries slower.
>
> Seems like a reasonable approach to me --- certainly better than setting
> random_page_cost to physically nonsensical values.
>
> In a fully-cached situation it's entirely reasonable to inflate the
> various cpu_xxx costs, since by assumption you are not paying the normal
> price of physical disk I/O. Fetching a page from kernel buffer cache
> is certainly cheaper than getting it off the disk. But the CPU costs
> involved in processing the page contents don't change. Since our cost
> unit is defined as 1.0 = one sequential page fetch, you have to increase
> the cpu_xxx numbers instead of reducing the I/O cost estimate.
>
> I would recommend inflating all the cpu_xxx costs by the same factor,
> unless you have evidence that they are wrong in relation to each other.
And don't forget to set effective_cache_size. It's the setting I overlooked
for the longest time when I started.
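The advice above can be sketched as a postgresql.conf fragment. The specific numbers here are illustrative only (the thread mentions 0.042 for cpu_tuple_cost on one particular machine); the point is to scale all three cpu_xxx settings by the same factor and to tell the planner about the OS cache, rather than dropping random_page_cost to an unrealistic value:

```
# postgresql.conf -- illustrative values, assuming a mostly-cached database.
# Tune against your own EXPLAIN ANALYZE timings.

# Defaults are cpu_tuple_cost = 0.01, cpu_index_tuple_cost = 0.001,
# cpu_operator_cost = 0.0025. Since the cost unit is defined as
# 1.0 = one sequential page fetch, inflating all three by the same
# factor (~4x here) models cheaper-than-disk page fetches without
# distorting the CPU costs relative to each other.
cpu_tuple_cost = 0.04
cpu_index_tuple_cost = 0.004
cpu_operator_cost = 0.01

# How much of the database the kernel buffer cache is likely to hold.
# In PostgreSQL of this era the value is in 8 kB disk pages, so
# 32768 pages is roughly 256 MB; use a figure based on `free` output.
effective_cache_size = 32768
```

These can also be tried per-session with SET (e.g. `SET cpu_tuple_cost = 0.04;`) before committing them to the config file.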