From: Yohanes Santoso <pgsql-hackers(at)microjet(dot)ath(dot)cx>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: determining random_page_cost value
Date: 2005-10-25 20:37:34
Message-ID: 87d5ltcoz5.fsf@microjet.ath.cx
Lists: pgsql-hackers
Josh Berkus <josh(at)agliodbs(dot)com> writes:
>> I tested the db files residing on a software RAID-1 composed of 2 IDE
>> 7200rpm drives on linux 2.6.12.
>
> FWIW, most performance-conscious users will be using a SCSI RAID
> array.
No worries, I'm not out to squeeze every last bit of performance from a
particular installation, which in this case is my home computer. I am
interested in automating the estimation of a suitable RPC value for a
given installation.
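For concreteness, something along these lines is what I have in mind:
time sequential vs. random reads of 8KB blocks (PostgreSQL's page size)
over a file and report the ratio. This is only a rough sketch; the path
and sample count are placeholders, and the file needs to be much larger
than RAM (or the OS cache dropped first) for the numbers to mean
anything:

import os
import random
import time

PATH = "/path/to/testfile"   # placeholder: a db file or large test file
BLOCK = 8192                 # PostgreSQL's page size
N = 10000                    # number of blocks to read in each pass

nblocks = os.path.getsize(PATH) // BLOCK
fd = os.open(PATH, os.O_RDONLY)

# Sequential pass: read N consecutive blocks from the start; the
# kernel's readahead should make this fast, which is exactly the effect
# we want to capture.
start = time.time()
os.lseek(fd, 0, os.SEEK_SET)
for _ in range(min(N, nblocks)):
    os.read(fd, BLOCK)
seq = time.time() - start

# Random pass: read the same number of blocks at uniformly random offsets.
start = time.time()
for _ in range(min(N, nblocks)):
    os.lseek(fd, random.randrange(nblocks) * BLOCK, os.SEEK_SET)
    os.read(fd, BLOCK)
rnd = time.time() - start

os.close(fd)
print("sequential: %.3fs  random: %.3fs  ratio: %.1f" % (seq, rnd, rnd / seq))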
> Well, it's actually calculating the cost ratio of pulling non-sequential
> random *rows* from the db files against pulling sequential blocks.
Then running it against the db files should yield a better estimate
than running it over sequentially laid-out pages.
>> On databases smaller (calculated from du <dbase_dir>) than 500M, I got a
>> ratio (random over sequential time) of 4.5:1. A 3.0GB database has a
>> ratio of 10:1. On a 3GB contiguous file, the ratio is about 4:1.
>
> All of this goes to uphold Tom's general assertion that the default of 4 is
> more or less correct
Doesn't this show that 4:1 is a pretty optimistic value, considering
that no long-running db files are fragmentation-free?
>but the calculation in which we're using that number is
> not.
The calculation inside the planner, IOW, how the planner uses the RPC
value?
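For illustration, here is a grossly simplified picture of that tradeoff
(the formulas below are illustrative assumptions, not the planner's
actual cost functions): with an RPC of 4, an index scan that fetches
more than about a quarter of a table's pages already costs more than
reading the whole table sequentially.

# Grossly simplified sketch; illustrative assumptions only, not
# PostgreSQL's real cost model.
random_page_cost = 4.0   # cost of one nonsequential page fetch
seq_page_cost = 1.0      # a sequential page fetch is the unit of cost
table_pages = 100000

def seq_scan_cost(pages):
    # A sequential scan reads every page of the table, in order.
    return pages * seq_page_cost

def index_scan_cost(pages_fetched):
    # Pretend every index-driven heap fetch is a random page read.
    return pages_fetched * random_page_cost

for frac in (0.05, 0.25, 0.50):
    fetched = int(table_pages * frac)
    print("fetch %3.0f%%: seqscan=%8.0f  indexscan=%8.0f"
          % (frac * 100, seq_scan_cost(table_pages), index_scan_cost(fetched)))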
Thanks,
YS.