From: Pierre-Frédéric Caillaud <lists(at)boutiquenumerique(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: preloading indexes
Date: 2004-11-03 19:50:04
Message-ID: opsgwmpq0fcq72hf@musicbox
Lists: pgsql-performance
--
Uh, you can always load a table into cache by doing a seq scan on it,
e.g. "select count(1) from table" or something. That doesn't work for
indexes, of course, but you can always look in the system catalogs, find
the filename for the index, then just open() it from an external program
and read it sequentially without caring about the data; that will save you
the seeks in the index. Of course you'll have problems with file
permissions etc, not to mention security, locking, etc, etc, etc. Is that
worth the trouble?
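As a minimal sketch of the file-warming idea above: sequentially read the index's underlying file so the OS pulls its pages into the filesystem cache, discarding the data. The path shown in the comment is illustrative only; the actual mapping from pg_class.relfilenode to a file under $PGDATA is an assumption here, not a supported API, and all the caveats above (permissions, locking) apply.

```python
import os

def warm_file(path, chunk_size=1 << 20):
    """Read a file sequentially so the OS caches its pages.

    The bytes themselves are thrown away; only the act of reading
    matters. Returns the number of bytes read.
    """
    total = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            total += len(chunk)
    return total

# The index file path would come from the system catalogs -- e.g. from
# pg_class.relfilenode joined against the database's directory under
# $PGDATA/base/<database oid>/.  This example path is hypothetical:
# warm_file("/var/lib/pgsql/data/base/16384/16402")
```

This is just cat-the-file-to-/dev/null with error handling; it does nothing PostgreSQL-specific, which is exactly why it sidesteps the server's own buffer management (and its safeguards).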
On Wed, 3 Nov 2004 14:35:28 -0500, Andrew Sullivan <ajs(at)crankycanuck(dot)ca>
wrote:
> On Wed, Nov 03, 2004 at 12:12:43PM -0700, stuff(at)opensourceonline(dot)com
> wrote:
>> That's correct - I'd like to be able to keep particular indexes in RAM
>> available all the time
>
> If these are queries that run frequently, then the relevant cache
> will probably remain populated[1]. If they _don't_ run frequently, why
> do you want to force the memory to be used to optimise something that
> is uncommon? But in any case, there's no mechanism to do this.
>
> A
>
> [1] there are in fact limits on the caching: if your data set is
> larger than memory, for instance, there's no way it will all stay
> cached. Also, VACUUM does nasty things to the cache. It is hoped
> that nastiness is fixed in 8.0.
>
Next Message: Tom Lane, 2004-11-03 19:55:17, Re: preloading indexes
Previous Message: Andrew Sullivan, 2004-11-03 19:35:28, Re: preloading indexes