From: Rajarshi Guha <rguha(at)indiana(dot)edu>
To: pgsql-general(at)postgresql(dot)org
Subject: indexing for query speed - index row size exceeding btree maximum
Date: 2006-10-05 16:52:08
Message-ID: 1160067128.7686.9.camel@localhost
Lists: pgsql-general
Hi, I have a table with 8M rows. One of the fields is of type text and I
wanted to create an index on it to improve query times. This field is a
single string (i.e., not a piece of normal text) and is really an
identifier (< 100 chars). I envisage queries like
select cid from tableName where fieldName = 'XYZ ... ';
So I did something like
create index someName on tableName (fieldName);
However, this returned with an error:
ERROR: index row size 2848 exceeds btree maximum, 2713
I noted that some other posters have faced this problem, but most of the
replies asked for details about what the index would be used for. I also
tried following one example where the index was created on the MD5 hash
of the field being indexed, but this did not make my query times
significantly faster.
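For concreteness, the md5-based variant looked roughly like this (the index
name here is just illustrative, not necessarily what I used); as I understand
it, the planner will only consider such an expression index if the query
compares the same md5() expression:
create index tableName_md5_idx on tableName (md5(fieldName));
-- the query then has to hash the search value as well
select cid from tableName where md5(fieldName) = md5('XYZ ... ');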
Is there a way for me to generate an index on this field so that my
query times can be reduced?
Thanks,
-------------------------------------------------------------------
Rajarshi Guha <rguha(at)indiana(dot)edu>
GPG Fingerprint: 0CCA 8EE2 2EEB 25E2 AB04 06F7 1BB9 E634 9B87 56EE
-------------------------------------------------------------------
A method of solution is perfect if we can foresee from the start,
and even prove, that following that method we shall attain our aim.
-- Leibnitz