From: Greg Stark <gsstark(at)mit(dot)edu>
To: Cédric Villemain <cedric(dot)villemain(at)dalibo(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org, marcin mank <marcin(dot)mank(at)gmail(dot)com>
Subject: Re: per table random-page-cost?
Date: 2009-10-22 18:01:23
Message-ID: 407d949e0910221101te247c85me4e1027d8090d405@mail.gmail.com
Lists: pgsql-hackers
On Thu, Oct 22, 2009 at 8:16 AM, Cédric Villemain
<cedric(dot)villemain(at)dalibo(dot)com> wrote:
> You can have situation where you don't want some tables go to OS memory
I don't think this is a configuration we want to cater for. The
sysadmin shouldn't be required to understand the i/o pattern of
Postgres. He or she cannot know whether the database will want to
access the same blocks twice for its internal algorithms; that isn't
visible from the user's point of view.
The scenarios where you might want to do this would be if you know
there are tables which are accessed very randomly with no locality and
very low cache hit rates. I think the direction we want to head is
towards making sure the cache manager is automatically resistant to
such data.
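(For what it's worth, a per-query workaround already exists today: random_page_cost is an ordinary planner GUC, so a session that knows its tables are accessed randomly and rarely cached can override it for a single transaction. A minimal sketch; the table and query here are hypothetical, but SET LOCAL and the GUC itself are standard:

```sql
-- Tell the planner random reads are expensive for this transaction only,
-- e.g. for a large table with no access locality and a cold cache.
-- SET LOCAL reverts automatically at COMMIT/ROLLBACK.
BEGIN;
SET LOCAL random_page_cost = 10.0;
SELECT * FROM big_uncached_table WHERE id = 42;  -- hypothetical query
COMMIT;
```

This is per-query rather than per-table, of course, which is exactly the gap the thread is discussing.)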
There is another use case which perhaps needs to be addressed: the
user may have some queries which are very latency-sensitive and others
which are not. In that case it might be very important to keep the
pages of data used by the high-priority queries in the cache. That's
something we should have a high-level abstract interface for, not
something that should depend on low-level system features.
--
greg