From: Michael Renner <michael(dot)renner(at)amd(dot)co(dot)at>
To: Gregory Stark <stark(at)enterprisedb(dot)com>
Cc: Postgres <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: How is random_page_cost=4 ok?
Date: 2008-10-10 17:53:40
Message-ID: 48EF96A4.2050101@amd.co.at
Lists: pgsql-hackers
Gregory Stark wrote:
> But with your numbers things look even weirder. With a 90MB/s sequential speed
> (91us) and 9ms seek latency that would be a random_page_cost of nearly 100!
Looks good :). If you actually want to base something on real-world
numbers, I'd suggest we collect them beforehand from existing setups.
I was introduced to IOmeter [1] at an HP performance course; it's a
nice GUI tool that lets you define workloads to your liking and run
them against given block devices. Unfortunately it's Windows-only.
fio [2] and IOzone [3] should do the same for the Unix world, without
the "nice" and "GUI" parts ;).
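To illustrate, a minimal fio job file for comparing sequential vs.
random 8 kB reads might look like this (the file name and size are
placeholders, not a recommended benchmark):

```ini
; hypothetical fio job -- filename/size are examples only
[global]
filename=/tmp/fio.testfile
size=1g
bs=8k
direct=1
ioengine=sync

[seqread]
rw=read

[randread]
stonewall
rw=randread
```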
For improving the model - in what situations would we benefit from a
more accurate model here?
Is it correct that this is only relevant for large (if not huge) tables
which border on (or don't fit in) effective_cache_size (and,
respectively, the OS page cache)?
And we need the cost to decide between a sequential scan, an index scan
(ORDER BY, small expected result set), and a bitmap index scan?
Speaking of bitmap index/heap scans - are those counted against
seq_page_cost or random_page_cost?
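For reference, the "nearly 100" figure quoted above falls out of simple
arithmetic; a quick sketch, assuming PostgreSQL's default 8 kB page size:

```python
# Sketch of the arithmetic behind the quoted ~100 figure.
block_size = 8192        # bytes per page (PostgreSQL default)
seq_throughput = 90e6    # 90 MB/s sequential read speed
seek_latency = 9e-3      # 9 ms random-access latency

seq_page_time = block_size / seq_throughput  # time to read one page sequentially
ratio = seek_latency / seq_page_time         # implied random_page_cost

print(round(seq_page_time * 1e6), round(ratio))  # → 91 99
```

So one sequential page read costs about 91 us, and a random page fetch
costs roughly 99 times that - "nearly 100".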
regards,
michael
[1] http://www.iometer.org/
[2] http://freshmeat.net/projects/fio/
[3] http://www.iozone.org/