From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Curt Sampson <cjs(at)cynic(dot)net>
Cc: nickf(at)ontko(dot)com, Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>, Ray Ontko <rayo(at)ontko(dot)com>
Subject: Re: Script to compute random page cost
Date: 2002-09-10 03:43:08
Message-ID: 16319.1031629388@sss.pgh.pa.us
Lists: pgsql-hackers
Curt Sampson <cjs(at)cynic(dot)net> writes:
> On Mon, 9 Sep 2002, Tom Lane wrote:
>> ... We are trying to measure the behavior when kernel
>> caching is not helpful; if the database fits in RAM then you are just
>> naturally going to get random_page_cost close to 1, because the kernel
>> will avoid doing any I/O at all.
> Um...yeah; another reason to use randread against a raw disk device.
> (A little hard to use on linux systems, I bet, but works fine on
> BSD systems.)
Umm... not really; surely randread wouldn't know anything about
read-ahead logic?
The reason this is a difficult topic is that we are trying to measure
certain kernel behaviors --- namely readahead for sequential reads ---
and not others --- namely caching, because we have other parameters
of the cost models that purport to deal with that.
Mebbe this is an impossible task and we need to restructure the cost
models from the ground up. But I'm not convinced of that. The fact
that a one-page shell script can't measure the desired quantity doesn't
mean we can't measure it with more effort.
regards, tom lane
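[Editorial note: the measurement pitfall discussed above can be seen directly with a minimal Python sketch, not from the thread. It times sequential versus random 8 kB page reads on a small scratch file, using `posix_fadvise(POSIX_FADV_DONTNEED)` (Linux-only, advisory) to ask the kernel to evict the file from its cache first. Because the scratch file easily fits in RAM and the eviction hint is not binding, the measured random/sequential ratio will typically come out near 1 -- exactly the caching effect Tom describes. A meaningful measurement would need a file much larger than RAM.]

```python
import os
import random
import tempfile
import time

PAGE = 8192     # PostgreSQL's default block size
NPAGES = 1024   # 8 MB scratch file; a real test needs a file >> RAM

# Build a scratch file of NPAGES pages.
fd, path = tempfile.mkstemp()
os.write(fd, os.urandom(PAGE) * NPAGES)
os.fsync(fd)

def drop_cache(fd):
    """Advise the kernel to evict this file from the page cache (advisory only)."""
    os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)

def time_reads(fd, offsets):
    """Read one PAGE at each offset, in order, and return elapsed seconds."""
    drop_cache(fd)
    t0 = time.perf_counter()
    for off in offsets:
        os.pread(fd, PAGE, off)
    return time.perf_counter() - t0

seq_offsets = [i * PAGE for i in range(NPAGES)]
rnd_offsets = seq_offsets[:]
random.shuffle(rnd_offsets)

t_seq = time_reads(fd, seq_offsets)
t_rnd = time_reads(fd, rnd_offsets)
print(f"sequential: {t_seq:.4f}s  random: {t_rnd:.4f}s  "
      f"ratio: {t_rnd / t_seq:.2f}")

os.close(fd)
os.unlink(path)
```

Note that the ratio printed here is an upper layer's view only: it conflates kernel readahead (which the cost model wants to capture) with kernel caching (which other cost parameters are supposed to model), which is precisely why a one-page script of this shape cannot measure random_page_cost on its own.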