From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Henry B(dot) Hotz" <hotz(at)jpl(dot)nasa(dot)gov>
Cc: pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: [HACKERS] Some notes on optimizer cost estimates
Date: 2000-01-24 18:13:45
Message-ID: 25997.948737625@sss.pgh.pa.us
Lists: pgsql-hackers
"Henry B. Hotz" <hotz(at)jpl(dot)nasa(dot)gov> writes:
>> I don't know how to do that --- AFAICS, getting trustworthy numbers by
>> measurement would require hundreds of meg of temporary disk space and
>> probably hours of runtime. (A smaller test would be completely
>> corrupted by kernel disk caching effects.)
> Getting a rough estimate of CPU speed is trivial. Getting a rough estimate
> of sequential disk access shouldn't be too hard, though you would need to
> make sure it didn't give the wrong answer if you ran configure twice in a
> row or something. Getting a rough estimate of disk access for a single
> non-sequential disk page also shouldn't be too hard with the same caveats.
In practice this would be happening at initdb time, not configure time,
since it'd be a lot easier to do it in C code than in a shell script.
But that's a detail. I'm still not clear on how you can wave away the
issue of kernel disk caching --- if you don't use a test file that's
larger than the disk cache, ISTM you risk getting a number that's
entirely devoid of any physical I/O at all.
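To make the caching problem concrete, here is a rough sketch (purely
illustrative, not anything proposed for the tree) of the sort of probe being
discussed. The 512MB scratch-file size, 8kB block size, and probe count are
assumptions; unless the file really is bigger than the kernel's disk cache,
both timings will mostly be measuring memory rather than physical I/O:

/*
 * Hypothetical sketch of an initdb-time I/O cost probe.  Builds a scratch
 * file, then times sequential 8kB reads against single-page reads in
 * random order.  All sizes here are assumptions, not PostgreSQL code.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/time.h>
#include <sys/types.h>

#define BLCKSZ        8192          /* match PostgreSQL's page size */
#define FILE_BLOCKS   (64 * 1024)   /* 512MB file (assumed > kernel cache) */
#define RANDOM_PROBES 4096

static double
elapsed_seconds(struct timeval start, struct timeval stop)
{
    return (stop.tv_sec - start.tv_sec) +
        (stop.tv_usec - start.tv_usec) / 1000000.0;
}

int
main(void)
{
    char        buf[BLCKSZ];
    int         fd;
    long        i;
    struct timeval start, stop;
    double      seq_time, rand_time;

    /* Build the scratch file out of dummy pages. */
    fd = open("iocost_probe.tmp", O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd < 0)
    {
        perror("open");
        return 1;
    }
    memset(buf, 0xA5, sizeof(buf));
    for (i = 0; i < FILE_BLOCKS; i++)
        if (write(fd, buf, BLCKSZ) != BLCKSZ)
        {
            perror("write");
            return 1;
        }
    fsync(fd);

    /* Time one full sequential scan. */
    lseek(fd, 0, SEEK_SET);
    gettimeofday(&start, NULL);
    for (i = 0; i < FILE_BLOCKS; i++)
        if (read(fd, buf, BLCKSZ) != BLCKSZ)
        {
            perror("read");
            return 1;
        }
    gettimeofday(&stop, NULL);
    seq_time = elapsed_seconds(start, stop);

    /* Time single-page fetches scattered randomly over the same file. */
    srandom((unsigned int) time(NULL));
    gettimeofday(&start, NULL);
    for (i = 0; i < RANDOM_PROBES; i++)
    {
        off_t       blk = (off_t) (random() % FILE_BLOCKS);

        lseek(fd, blk * BLCKSZ, SEEK_SET);
        if (read(fd, buf, BLCKSZ) != BLCKSZ)
        {
            perror("read");
            return 1;
        }
    }
    gettimeofday(&stop, NULL);
    rand_time = elapsed_seconds(start, stop);

    close(fd);
    unlink("iocost_probe.tmp");

    printf("seq: %.6f s/page  random: %.6f s/page  ratio: %.2f\n",
           seq_time / FILE_BLOCKS, rand_time / RANDOM_PROBES,
           (rand_time / RANDOM_PROBES) / (seq_time / FILE_BLOCKS));
    return 0;
}

The number of interest is the final ratio (roughly what a random-versus-
sequential page cost would be derived from), but as written both loops would
hit pages the write phase just left in the kernel cache, which is exactly why
the test file would need to be several times the size of physical RAM before
the result could be trusted.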
regards, tom lane