From: Jesper Krogh <jesper(at)krogh(dot)cc>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: reducing random_page_cost from 4 to 2 to force index scan
Date: 2011-05-16 04:49:20
Message-ID: 4DD0ACD0.1090801@krogh.cc
Lists: pgsql-performance
On 2011-05-16 06:41, Jesper Krogh wrote:
> On 2011-05-16 03:18, Greg Smith wrote:
>> You can't do it in real-time. You don't necessarily want that
>> even if it were possible; too many possibilities for nasty feedback
>> loops where you always favor using some marginal index that happens
>> to be in memory, and therefore never page in things that would be
>> faster once they're read. The only reasonable implementation that
>> avoids completely unstable plans is to scan this data periodically
>> and save some statistics on it--the way ANALYZE does--and then have
>> that turn into a planner input.
>
> Would that be feasible? Have a process collect the data every
> now and then, probably applying some conservative averaging
> function, and feed it into pg_stats for each index/relation?
>
> To me it seems like a robust and fairly trivial way to get better
> numbers. The fear is that the OS cache is too much in flux to get
> any stable numbers out of it.
OK, it may not work as well with indexes, since having 1% of an index
in cache may very well mean that 90% of all requested blocks are
there; for tables it should be more trivial.
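
For what it's worth, here is a minimal sketch of how such a collector
could sample OS-cache residency on Linux, roughly along the lines of
what the pgfincore extension does: mmap the relation's data file
without touching it and ask mincore(2) which pages are resident. The
file path and output format are illustrative only, not a proposed
interface.

/* cachestat.c - sketch: estimate what fraction of a file's pages
 * currently reside in the OS page cache, using mincore(2).
 * Linux-specific; run against a relation file under $PGDATA/base.
 */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <relation-file>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }
    if (st.st_size == 0) { printf("empty file\n"); return 0; }

    long pagesize = sysconf(_SC_PAGESIZE);
    size_t npages = (st.st_size + pagesize - 1) / pagesize;

    /* Map the file without reading it; PROT_NONE ensures we never
     * touch the data and so never perturb the cache we measure. */
    void *map = mmap(NULL, st.st_size, PROT_NONE, MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) { perror("mmap"); return 1; }

    /* One byte per page; bit 0 set means the page is resident. */
    unsigned char *vec = malloc(npages);
    if (mincore(map, st.st_size, vec) < 0) { perror("mincore"); return 1; }

    size_t resident = 0;
    for (size_t i = 0; i < npages; i++)
        resident += vec[i] & 1;

    printf("%zu of %zu pages resident (%.1f%%)\n",
           resident, npages, 100.0 * resident / npages);

    free(vec);
    munmap(map, st.st_size);
    close(fd);
    return 0;
}

A collector like this could run periodically per relation file and
feed an averaged residency fraction to the planner, ANALYZE-style; of
course the mincore() snapshot is only as stable as the cache itself,
which is exactly the concern above.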
--
Jesper