From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Dennis Björklund <db(at)zigo(dot)dhs(dot)org>
Cc: Fabian Kreitner <fabian(dot)kreitner(at)ainea-ag(dot)de>, pgsql-performance(at)postgresql(dot)org
Subject: Re: index / sequential scan problem
Date: 2003-07-18 13:24:58
Message-ID: 23376.1058534698@sss.pgh.pa.us
Lists: pgsql-performance

Dennis Björklund <db(at)zigo(dot)dhs(dot)org> writes:
> On Fri, 18 Jul 2003, Fabian Kreitner wrote:
>> Adjusting the cpu_tuple_cost to 0.042 got the planner to choose the index.
> Doesn't sound very good and it will most likely make other queries slower.
Seems like a reasonable approach to me --- certainly better than setting
random_page_cost to physically nonsensical values.
In a fully-cached situation it's entirely reasonable to inflate the
various cpu_xxx costs, since by assumption you are not paying the normal
price of physical disk I/O. Fetching a page from kernel buffer cache
is certainly cheaper than getting it off the disk. But the CPU costs
involved in processing the page contents don't change. Since our cost
unit is defined as 1.0 = one sequential page fetch, you have to increase
the cpu_xxx numbers instead of reducing the I/O cost estimate.
I would recommend inflating all the cpu_xxx costs by the same factor,
unless you have evidence that they are wrong in relation to each other.
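As a concrete sketch of the advice above: the thread's cpu_tuple_cost = 0.042 is roughly 4.2x its stock default of 0.01, so the same 4.2x factor would be applied to the other cpu_xxx settings. (The 0.001 and 0.0025 defaults assumed here are from the 7.x-era documentation; check the actual defaults on your installation with SHOW before scaling.)

```sql
-- Inflate all cpu_xxx costs by the same factor (4.2x here, matching the
-- cpu_tuple_cost = 0.042 value from this thread), so their ratios to each
-- other stay unchanged while their ratio to the I/O cost unit
-- (1.0 = one sequential page fetch) increases.
SET cpu_tuple_cost = 0.042;        -- assumed default 0.01   * 4.2
SET cpu_index_tuple_cost = 0.0042; -- assumed default 0.001  * 4.2
SET cpu_operator_cost = 0.0105;    -- assumed default 0.0025 * 4.2
```

These SET commands change the values for the current session only; to make them permanent they would go in postgresql.conf.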
regards, tom lane