From: Jeff Davis <pgsql(at)j-davis(dot)com>
To: Greg Stark <gsstark(at)mit(dot)edu>
Cc: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, marcin mank <marcin(dot)mank(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: per table random-page-cost?
Date: 2009-10-20 00:34:20
Message-ID: 1255998860.31947.189.camel@monkey-cat.sm.truviso.com
Lists: pgsql-hackers
On Mon, 2009-10-19 at 16:39 -0700, Greg Stark wrote:
> But the long-term strategy here I think is to actually have some way
> to measure the real cache hit rate on a per-table basis. Whether it's
> by timing i/o operations, programmatic access to dtrace, or some other
> kind of os interface, if we could know the real cache hit rate it
> would be very helpful.
Maybe it would be simpler to just get the high-order bit: is this table
likely to be completely in cache (shared buffers or OS buffer cache), or
not?
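
A minimal sketch of how such a high-order bit could feed the cost model
(purely illustrative; the flag, struct, and cost fields below are
hypothetical, not existing PostgreSQL planner code):

```c
#include <stdbool.h>

/* Hypothetical per-table costing info: a single boolean says "assume this
 * table is fully cached", and the planner charges the cheap cached-page
 * cost instead of random_page_cost whenever it is set. */
typedef struct RelCostInfo
{
    bool   assume_cached;     /* hypothetical per-table flag */
    double random_page_cost;  /* cost of an uncached random page fetch */
    double cached_page_cost;  /* cost of a fetch from shared buffers / OS cache */
} RelCostInfo;

static double
random_fetch_cost(const RelCostInfo *rel)
{
    return rel->assume_cached ? rel->cached_page_cost
                              : rel->random_page_cost;
}
```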
The lower cache hit ratios are uninteresting: the difference in I/O cost
between a 1% and a 50% hit ratio is only about a factor of two. And high
hit ratios short of "almost 100%" seem unlikely in practice: what kind of
workload would sustain a stable 90% cache hit ratio for a table?
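
To put rough numbers on the factor-of-two point, here is a back-of-the-envelope
blend of cached and on-disk fetch costs (the 1 and 100 cost units are
arbitrary assumptions, not planner defaults):

```c
#include <stdio.h>

/* Expected cost per page fetch, given an estimated cache hit ratio. */
static double
expected_fetch_cost(double hit_ratio, double cached_cost, double disk_cost)
{
    return hit_ratio * cached_cost + (1.0 - hit_ratio) * disk_cost;
}

int
main(void)
{
    /* 1% vs. 50% hit ratio: about 99 misses vs. 50 misses per 100 fetches,
     * i.e. only about a 2x difference in total I/O cost. */
    printf("1%%    cached: %.1f\n", expected_fetch_cost(0.010, 1.0, 100.0));
    printf("50%%   cached: %.1f\n", expected_fetch_cost(0.500, 1.0, 100.0));
    /* Only the near-100% case changes the picture dramatically. */
    printf("99.9%% cached: %.1f\n", expected_fetch_cost(0.999, 1.0, 100.0));
    return 0;
}
```

With those assumed costs, the 1% case comes out around 99 units per fetch,
the 50% case around 50.5, and the 99.9% case around 1.1.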
Regards,
Jeff Davis