From: | Nathan Boley <npboley(at)gmail(dot)com> |
---|---|
To: | Claudio Freire <klaussfreire(at)gmail(dot)com> |
Cc: | Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, Ogden <lists(at)darkstatic(dot)com>, Tomas Vondra <tv(at)fuzzy(dot)cz>, pgsql-performance(at)postgresql(dot)org |
Subject: | Re: Performance |
Date: | 2011-04-13 22:05:14 |
Message-ID: | BANLkTimyWkoX8Dj=4CKAjhY82ibru-An7g@mail.gmail.com |
Lists: | pgsql-performance |
>> If you model the costing to reflect the reality on your server, good
>> plans will be chosen.
>
> Wouldn't it be "better" to derive those costs from actual performance
> data measured at runtime?
>
> Say, pg could measure random/seq page cost, *per tablespace* even.
>
> Has that been tried?
FWIW, a while ago I wrote a simple script to measure this and found
that the *actual* random_page / seq_page cost ratio was much higher
than 4/1.
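Something along these lines (not the exact script; the path and sample
counts are placeholders) will give a ballpark number, provided the test
file is much larger than RAM and the OS cache has been dropped first:

```python
# Rough sketch only: compare 8 kB sequential vs. random reads on a file
# much larger than RAM (PATH, BLOCK and N are placeholders). Drop the OS
# cache first, or the numbers will mostly reflect cache hits.
import os
import random
import time

PATH = "/path/to/big_test_file"   # must be much larger than RAM
BLOCK = 8192                      # PostgreSQL page size
N = 10000                         # number of pages to sample

size = os.path.getsize(PATH)
pages = size // BLOCK
fd = os.open(PATH, os.O_RDONLY)

# Sequential: N consecutive pages from the start of the file.
start = time.time()
os.lseek(fd, 0, os.SEEK_SET)
for _ in range(N):
    os.read(fd, BLOCK)
seq = time.time() - start

# Random: N pages at random offsets.
start = time.time()
for _ in range(N):
    os.lseek(fd, random.randrange(pages) * BLOCK, os.SEEK_SET)
    os.read(fd, BLOCK)
rnd = time.time() - start

os.close(fd)
print("seq %.3fs  random %.3fs  ratio %.1f" % (seq, rnd, rnd / seq))
```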
The problem is that caching has a large effect on the time it takes to
access a random page, and those caching effects are very
workload-dependent. So anything automated would probably need to
optimize the parameter values over a set of 'typical' queries, which is
exactly what a good DBA does when they set random_page_cost...
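By way of illustration, a crude version of that tuning loop might look
like the sketch below (assuming psycopg2; the connection string and
query list are placeholders for whatever is representative of your
workload):

```python
# Hedged sketch, not a drop-in tool: time a set of representative queries
# under a few candidate random_page_cost values. DSN and QUERIES are
# placeholders; fill QUERIES with your own 'typical' workload.
import time
import psycopg2

DSN = "dbname=mydb"
QUERIES = [
    "SELECT count(*) FROM pg_class",   # replace with real workload queries
]
CANDIDATES = [1.5, 2.0, 4.0]

conn = psycopg2.connect(DSN)
cur = conn.cursor()

for rpc in CANDIDATES:
    # SET is per-session, so this only affects plans chosen below.
    cur.execute("SET random_page_cost = %s" % rpc)
    total = 0.0
    for q in QUERIES:
        start = time.time()
        cur.execute(q)
        cur.fetchall()
        total += time.time() - start
    print("random_page_cost = %s: %.2fs" % (rpc, total))

cur.close()
conn.close()
```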
Best,
Nathan