From: Kevin Murphy <murphy(at)genome(dot)chop(dot)edu>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: speeding up a query on a large table
Date: 2005-08-18 02:00:40
Message-ID: 4303EBC8.2090503@genome.chop.edu
Lists: pgsql-general
Mike Rylander wrote:
>On 8/17/05, Manfred Koizar <mkoi-pg(at)aon(dot)at> wrote:
>
>>On Mon, 25 Jul 2005 17:50:55 -0400, Kevin Murphy
>><murphy(at)genome(dot)chop(dot)edu> wrote:
>>
>>>and because the number of possible search terms is so large, it
>>>would be nice if the entire index could somehow be preloaded into memory
>>>and encouraged to stay there.
>>
>>You could try to copy the relevant index
>>file(s) to /dev/null to populate the OS cache ...
>
>That actually works fine. When I had big problems with a large GiST
>index I just used cat to dump it to /dev/null and the OS grabbed it.
>Of course, that was on Linux so YMMV.
>
Thanks, Manfred & Mike. That is a very nice solution. And just for the
sake of the archive ... I can find the on-disk file(s) for the relevant
index or table by looking up pg_class.relfilenode where pg_class.relname
is the name of the relation, and then doing, e.g.: sudo -u postgres find
/usr/local/pgsql/data -name "somerelfilenode*".
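
For completeness, here is a sketch of the whole recipe as a shell snippet.
The names are illustrative (it assumes a relation called "my_big_index" in a
database "mydb" and a data directory of /usr/local/pgsql/data); adjust them
to your installation:

    # Illustrative only: relation "my_big_index", database "mydb", and the
    # data directory /usr/local/pgsql/data are assumptions.

    # 1. Look up the on-disk file name (relfilenode) for the relation.
    RELFILENODE=$(sudo -u postgres psql -At -d mydb \
        -c "SELECT relfilenode FROM pg_class WHERE relname = 'my_big_index'")

    # 2. Find the matching file(s) -- segments larger than 1 GB get .1, .2, ...
    #    suffixes, hence the wildcard -- and read them to /dev/null so the OS
    #    caches their pages.
    sudo -u postgres find /usr/local/pgsql/data -name "${RELFILENODE}*" \
        -exec cat {} \; > /dev/null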
-Kevin Murphy