keeping an index in memory

From: Rajarshi Guha <rguha(at)indiana(dot)edu>
To: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: keeping an index in memory
Date: 2007-10-21 01:19:21
Message-ID: BDFAFEDA-3125-49A2-B963-47538838929B@indiana.edu
Lists: pgsql-general

Hi, following up on my earlier questions about doing spatial searches on 10M
rows: it seems that most of my queries return within 2 minutes.
Generally this is not too bad, though faster is always better.
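
For concreteness, the searches look roughly like the following; the table
and column names here (points, coords, and a GiST index on coords) are
placeholders, not my real schema:

    -- bounding-box search: the <@ "contained in" operator
    -- can use a GiST index on the cube column
    SELECT id
    FROM points
    WHERE coords <@ cube(array[0.0, 0.0, 0.0],
                         array[1.0, 1.0, 1.0]);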

Interestingly, the CUBE index on the table in question is about 3GB
(the table itself is about 14GB). Not knowing the details of the
Postgres internals, I assume that when a query uses the index, it will
need to read a 3GB file. Is this a correct assumption?
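
For reference, sizes like these can be checked with something like the
following (the index name is again a placeholder):

    -- heap size vs. index size
    SELECT pg_size_pretty(pg_relation_size('points'))            AS table_size,
           pg_size_pretty(pg_relation_size('points_coords_idx')) AS index_size;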

In such a situation, is there a way to keep the index in memory? My
machine has 8GB installed and currently has about 7.4GB of free RAM
(64-bit Linux, kernel 2.6.9).
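
From what I've read, the closest knobs are shared_buffers and
effective_cache_size; a sketch of what those might look like on an 8GB
box (values are illustrative; neither setting pins the index in RAM,
and anything beyond shared_buffers is cached by the OS):

    # postgresql.conf -- illustrative values for an 8GB machine
    shared_buffers = 2GB            # postgres's own buffer cache
    effective_cache_size = 6GB      # planner hint: total cache (pg + OS)
                                    # expected; does not allocate memory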

A side effect of the index's size is that a query that performs a seq
scan (say, one using cube_distance) takes longer than one that uses
the index, but not significantly longer. And this is on a 10M-row
table.
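
The two query shapes I'm contrasting look roughly like this (same
placeholder schema as above); ordering by cube_distance has no index
support, so that one scans the whole table:

    -- index path: containment search via the GiST index
    EXPLAIN ANALYZE
    SELECT id
    FROM points
    WHERE coords <@ cube(array[0.0, 0.0, 0.0],
                         array[1.0, 1.0, 1.0]);

    -- seq-scan path: distance ordering over all 10M rows
    EXPLAIN ANALYZE
    SELECT id
    FROM points
    ORDER BY cube_distance(coords, cube(array[0.0, 0.0, 0.0]))
    LIMIT 10;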

What strategies do people follow when the index becomes very big?

Thanks,

-------------------------------------------------------------------
Rajarshi Guha <rguha(at)indiana(dot)edu>
GPG Fingerprint: 0CCA 8EE2 2EEB 25E2 AB04 06F7 1BB9 E634 9B87 56EE
-------------------------------------------------------------------
Does Ramanujan know Polish?
-- E.B. Ross
