From: Greg Smith <greg(at)2ndquadrant(dot)com>
To: Jim Nasby <jim(at)nasby(dot)net>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Robert Haas <robertmhaas(at)gmail(dot)com>, Jesper Krogh <jesper(at)krogh(dot)cc>, pgsql-performance(at)postgresql(dot)org
Subject: Re: reducing random_page_cost from 4 to 2 to force index scan
Date: 2011-05-19 03:00:31
Message-ID: 4DD487CF.8010003@2ndquadrant.com
Lists: pgsql-performance
Jim Nasby wrote:
> I think the challenge there would be how to define the scope of the hot-spot. Is it the last X pages? Last X serial values? Something like correlation?
>
> Hmm... it would be interesting if we had average relation access times for each stats bucket on a per-column basis; that would give the planner a better idea of how much IO overhead there would be for a given WHERE clause.
You've already given one reasonable first answer to your question here.
If you defined a usage counter for each histogram bucket, and
incremented it each time something from that bucket was touched, you'd
get a very rough picture of the access distribution. Compute each
bucket's share of the total counts, then take an estimate of the
relation's overall cached percentage; multiplying the two gives you an
idea of how much of that specific bucket might be in memory. It's not
perfect, and you'd need to incorporate some sort of aging method
(probably a weighted average), but the basic idea could work.
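
To make the arithmetic concrete, here's a rough sketch in Python of the
idea; the class, the decay factor, and the fallback when there's no data
yet are purely illustrative assumptions, not anything that exists in the
planner today:

    # Per-histogram-bucket usage counters, aged with a weighted average,
    # combined with an estimated cached percentage for the whole relation.
    # Hypothetical names and constants, for illustration only.

    DECAY = 0.75  # assumed weight kept from older counts when aging


    class BucketUsage:
        def __init__(self, n_buckets):
            self.counts = [0.0] * n_buckets

        def touch(self, bucket):
            # Increment the usage counter whenever a value falling in
            # this histogram bucket is accessed.
            self.counts[bucket] += 1.0

        def age(self):
            # Weighted-average style aging so old activity fades out
            # instead of accumulating forever.
            self.counts = [c * DECAY for c in self.counts]

        def cached_estimate(self, bucket, relation_cached_pct):
            # Ratio of this bucket's counts to the total, multiplied by
            # the relation's estimated cached percentage.  Only a rough
            # relative indicator of how much of the bucket is in memory.
            total = sum(self.counts)
            if total == 0:
                # No data yet: fall back to the overall figure.
                return relation_cached_pct
            return (self.counts[bucket] / total) * relation_cached_pct


    # Example: the last bucket (recent serial values) gets most accesses.
    usage = BucketUsage(10)
    for _ in range(90):
        usage.touch(9)
    for _ in range(10):
        usage.touch(0)
    print(usage.cached_estimate(9, 0.5))  # ~0.45: likely mostly cached
    print(usage.cached_estimate(0, 0.5))  # ~0.05: probably mostly on disk

The example at the end just shows the hot-spot case from earlier in the
thread: a heavily touched last bucket ends up with most of the estimated
cached fraction, while a cold bucket gets almost none.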
--
Greg Smith 2ndQuadrant US greg(at)2ndQuadrant(dot)com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us
"PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books