Nathan Boley <npboley(at)gmail(dot)com> wrote:
> The problem is that caching has a large effect on the time it
> takes to access a random page, and caching behavior is very
> workload dependent. So anything automated would probably need
> to optimize the parameter values over a set of 'typical' queries,
> which is exactly what a good DBA does when they set
> random_page_cost...
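
For illustration, that manual loop might look like the following;
the table and query are made up, and 4.0 is the default value of
random_page_cost:

    -- Try a lower value for this session only; values near 1.0
    -- suit workloads where most random reads are cached.
    SET random_page_cost = 1.5;

    -- Re-run a representative query and compare the plan and
    -- timing against the default setting.
    EXPLAIN ANALYZE
    SELECT * FROM orders WHERE customer_id = 42;

    -- If the new value wins across the 'typical' query set,
    -- persist it in postgresql.conf.

Whether 1.5 beats the default depends entirely on how much of the
working set is cached, which is exactly the point above.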
Another database product I've used provides a stored procedure you
run to turn on workload monitoring and another to turn it off and
report on what happened during the interval. It drags performance
down enough that you don't want to leave it running except as a
tuning exercise, but it produces very detailed statistics and even
offers suggestions on what you might tune to improve performance.
If someone wanted to write something to address this issue, that
seems like a sound overall strategy.
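
In PostgreSQL, the start/report shape of that interface could be
sketched over the statistics views. The names here
(monitor_snapshot, monitor_start, monitor_report) are made up, and
this only diffs block I/O counts, not the detailed statistics and
suggestions that product produces:

    -- Snapshot table holding per-table block counters at start time.
    CREATE TABLE monitor_snapshot AS
      SELECT relname, heap_blks_read, heap_blks_hit
      FROM pg_statio_user_tables
      WITH NO DATA;

    CREATE FUNCTION monitor_start() RETURNS void AS $$
    BEGIN
      TRUNCATE monitor_snapshot;
      INSERT INTO monitor_snapshot
        SELECT relname, heap_blks_read, heap_blks_hit
        FROM pg_statio_user_tables;
    END;
    $$ LANGUAGE plpgsql;

    -- Per-table disk reads and cache hit ratio for the interval;
    -- a low ratio argues for a higher random_page_cost, a high
    -- one for a value closer to seq_page_cost.
    CREATE FUNCTION monitor_report()
    RETURNS TABLE (tbl name, blks_read bigint, hit_ratio numeric)
    AS $$
      SELECT cur.relname,
             cur.heap_blks_read - snap.heap_blks_read,
             round((cur.heap_blks_hit - snap.heap_blks_hit)::numeric
                   / nullif((cur.heap_blks_hit - snap.heap_blks_hit)
                          + (cur.heap_blks_read - snap.heap_blks_read),
                            0), 3)
      FROM pg_statio_user_tables cur
      JOIN monitor_snapshot snap USING (relname);
    $$ LANGUAGE sql;

Unlike the product above, reading these views costs next to nothing;
the real expense in a serious version would be capturing per-query
detail, e.g. via pg_stat_statements.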
-Kevin