From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Jesper Krogh <jesper(at)krogh(dot)cc>, pgsql-performance(at)postgresql(dot)org
Subject: Re: reducing random_page_cost from 4 to 2 to force index scan
Date: 2011-05-16 15:46:45
Message-ID: 16676.1305560805@sss.pgh.pa.us
Lists: pgsql-performance
Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> On Mon, May 16, 2011 at 12:49 AM, Jesper Krogh <jesper(at)krogh(dot)cc> wrote:
>> Ok, it may not work as well with indexes, since having 1% in cache may very
>> well mean that 90% of all requested blocks are there... for tables it should
>> be more trivial.
> Tables can have hot spots, too. Consider a table that holds calendar
> reservations. Reservations can be inserted, updated, deleted. But
> typically, the most recent data will be what is most actively
> modified, and the older data will be relatively more (though not
> completely) static, and less frequently accessed. Such examples are
> common in many real-world applications.
Yes. I'm not convinced that measuring the fraction of a table or index
that's in cache is really going to help us much. Historical cache hit
rates might be useful, but only to the extent that the incoming query
has a similar access pattern to those in the (recent?) past. It's not
an easy problem.
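(For reference, the historical cache hit rates mentioned above are already
tracked by the statistics collector; a minimal sketch of reading per-table
hit ratios from the standard pg_statio_user_tables view:)

    -- Per-table buffer-cache hit ratios from the statistics collector.
    -- heap_blks_hit counts shared-buffer hits; heap_blks_read counts
    -- blocks fetched from the OS (which may itself have cached them --
    -- one reason these numbers are only a rough guide).
    SELECT relname,
           heap_blks_read,
           heap_blks_hit,
           round(heap_blks_hit::numeric
                 / nullif(heap_blks_hit + heap_blks_read, 0), 3) AS hit_ratio
    FROM pg_statio_user_tables
    ORDER BY heap_blks_hit + heap_blks_read DESC;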
I almost wonder if we should not try to measure this at all, but instead
let the DBA set a per-table or per-index number to use, analogous to the
override we added recently for column n-distinct statistics ...
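(For reference, the n-distinct override referred to is the per-column
attribute option added in 9.0, and page costs can already be overridden per
tablespace; the per-relation cache knob itself is hypothetical, and the
table/column/tablespace names below are made up for illustration:)

    -- The existing per-column n-distinct override (PostgreSQL 9.0+):
    ALTER TABLE reservations ALTER COLUMN room_id SET (n_distinct = 200);

    -- Page costs can already be overridden per tablespace (also 9.0+):
    ALTER TABLESPACE fast_ssd SET (random_page_cost = 2.0);

    -- A per-relation analog along the lines sketched above; note that
    -- "cache_fraction" is NOT an existing reloption, just illustration:
    -- ALTER TABLE reservations SET (cache_fraction = 0.9);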
regards, tom lane