From: Dan Armbrust <daniel(dot)armbrust(dot)list(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: index row size exceeds btree maximum, 2713 - Solutions?
Date: 2005-07-18 19:44:26
Message-ID: 42DC069A.60003@gmail.com
Lists: pgsql-general
I'm trying to load some data into PostgreSQL 8.0.3, and I got the error
message "index row size 2904 exceeds btree maximum, 2713". After a
bunch of searching, I believe I am getting this error because a value I
am indexing is longer than roughly 1/3 of the block size (the BLCKSZ
setting in src/include/pg_config_manual.h).
Am I correct so far?
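
For reference, here is a minimal sketch of the kind of situation I'm
hitting. The table and column names are made up for illustration; the
point is just that any indexed text value over roughly a third of the
8192-byte block size trips the same error.

-- Hypothetical table; "val" stands in for my real indexed column.
CREATE TABLE wide_values (
    id  integer PRIMARY KEY,
    val text
);

CREATE INDEX wide_values_val_idx ON wide_values (val);

-- A value around 3000 bytes fails with something like:
--   ERROR:  index row size ... exceeds btree maximum, 2713
INSERT INTO wide_values VALUES (1, repeat('x', 3000));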
I need to fix this problem. I cannot change the indexed columns, I
cannot shorten the data value, and I cannot MD5 it or use any of the
other hashing-style solutions that came up a lot while searching.
Is there a setting somewhere so that PostgreSQL would just truncate the
value to the maximum length the index can handle when it enters it into
the index, instead of failing with an error? I would be fine with this
particular row not being fully indexed, as long as I could still
retrieve the full data value.
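
If there is no such setting, something like the following is what I had
in mind, assuming an expression index on a prefix of the value would be
acceptable. Expression indexes and substr() are standard PostgreSQL
features, but the names and the 500-character cutoff below are made up
for illustration, and I haven't verified this against my actual schema:

-- Index only a prefix; the full value stays in the table. 500 characters
-- keeps the index entry well under the ~2713-byte limit.
CREATE INDEX wide_values_val_prefix_idx
    ON wide_values (substr(val, 1, 500));

-- Queries have to use the same expression to use the index; the extra
-- equality check keeps results correct when two values share a prefix.
SELECT *
  FROM wide_values
 WHERE substr(val, 1, 500) = substr('...search value...', 1, 500)
   AND val = '...search value...';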
The other solution that I saw was to modify the BLCKSZ variable. From
what I saw, it appears that to change that variable, I would need to
dump my databases out, recompile everything, and then reload them from
scratch. Is this correct?
Currently BLCKSZ is set to 8192. What would be the performance, disk
usage, and other implications of doubling this value to 16384?
Any other suggestions in dealing with this problem?
Thanks,
Dan