From: "Huang, Suya" <Suya(dot)Huang(at)au(dot)experian(dot)com>
To: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: weird error "index row size 3040 exceeds btree maximum, 2712" occurs randomly
Date: 2013-10-14 03:47:21
Message-ID: D83E55F5F4D99B4A9B4C4E259E6227CD9DD718@AUX1EXC01.apac.experian.local
Lists: pgsql-general
Hi,
OK, first: I know the cause of this error ("index row size 3040 exceeds btree maximum, 2712"), and I know that we cannot build a btree index on column values larger than 1/3 of a buffer page.
The question is that no matter whether I deleted the records that caused the problem, or even all records in the table, the error still occurred, and then it would disappear on its own after a while, say 1 or 2 minutes.
So I suspect this is either a bug or some PostgreSQL internal mechanism I am not aware of. What could lead to this behavior?
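(For context: a quick way to flag the rows that are too big for the index would be a size filter along the lines of the sketch below; 2712 is the maximum reported by the error, and pg_column_size() only approximates the final index datum size.)

-- sketch: find rows whose term value likely exceeds the btree limit for the partial index
select catid, length(term), pg_column_size(term)
from test
where catid = 1 and pg_column_size(term) > 2712;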
See my test below:
pgdb=# drop table test;
DROP TABLE
pgdb=# create table test as select * from tbl_weekly;
SELECT
pgdb=# create index idx_test on test USING btree (term) WITH (fillfactor=100) WHERE catid = 1;
ERROR: index row size 3040 exceeds btree maximum, 2712
HINT: Values larger than 1/3 of a buffer page cannot be indexed.
Consider a function index of an MD5 hash of the value, or use full text indexing.
-- because this column is TOASTed, the relevant size is determined by pg_column_size(), not octet_length()
pgdb=# select length(term), pg_column_size(term),octet_length(term),catid from test where length(term)>=2000 order by 1;
length | pg_column_size | octet_length | catid
--------+----------------+--------------+-------
2088 | 1430 | 2088 | 80
2088 | 1430 | 2088 | 125
2088 | 1430 | 2088 | 1
2190 | 1450 | 2190 | 50
2190 | 1450 | 2190 | 1
2190 | 1450 | 2190 | 30
2205 | 1184 | 2205 | 80
2205 | 1184 | 2205 | 1
2205 | 1184 | 2205 | 100
2586 | 1894 | 2586 | 100
2586 | 1894 | 2586 | 80
2586 | 1894 | 2586 | 320
2586 | 1894 | 2586 | 1
5179 | 3028 | 5179 | 1
5179 | 3028 | 5179 | 125
5179 | 3028 | 5179 | 80
(16 rows)
-- so the fix is to delete records with pg_column_size > 2700 (the btree maximum is 2712); in this case, the record with pg_column_size = 3028 (length = 5179) and catid = 1
pgdb=# delete from test where length(term) =5179 and catid=1;
DELETE 1
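-- (aside: a more direct cleanup would be to filter on the measured size itself rather
-- than on length; an untested sketch, using the 2712 maximum from the error message:
-- delete from test where catid = 1 and pg_column_size(term) > 2712;)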
pgdb=# select length(term), pg_column_size(term),octet_length(term),catid from test where length(term)>=2000 order by 1;
length | pg_column_size | octet_length | catid
--------+----------------+--------------+-------
2088 | 1430 | 2088 | 80
2088 | 1430 | 2088 | 1
2088 | 1430 | 2088 | 125
2190 | 1450 | 2190 | 1
2190 | 1450 | 2190 | 30
2190 | 1450 | 2190 | 50
2205 | 1184 | 2205 | 80
2205 | 1184 | 2205 | 1
2205 | 1184 | 2205 | 100
2586 | 1894 | 2586 | 80
2586 | 1894 | 2586 | 320
2586 | 1894 | 2586 | 100
2586 | 1894 | 2586 | 1
5179 | 3028 | 5179 | 125
5179 | 3028 | 5179 | 80
(15 rows)
-- even after deleting the record that caused the problem, the index creation SQL fails again
pgdb=# create index idx_test on test USING btree (term) WITH (fillfactor=100) WHERE catid = 1;
ERROR: index row size 3040 exceeds btree maximum, 2712
HINT: Values larger than 1/3 of a buffer page cannot be indexed.
Consider a function index of an MD5 hash of the value, or use full text indexing.
pgdb=# vacuum full analyze test;
VACUUM
pgdb=# create index idx_test on test USING btree (term) WITH (fillfactor=100) WHERE catid = 1;
ERROR: index row size 3040 exceeds btree maximum, 2712
HINT: Values larger than 1/3 of a buffer page cannot be indexed.
Consider a function index of an MD5 hash of the value, or use full text indexing.
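-- (aside: one thing worth checking here might be whether dead tuples are still around;
-- an untested sketch against the statistics view, where n_dead_tup is only an estimate:
-- select relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
-- from pg_stat_user_tables where relname = 'test';)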
-- after a while, running the same index creation SQL again succeeded
pgdb=# create index idx_test on test USING btree (term) WITH (fillfactor=100) WHERE catid = 1;
CREATE INDEX
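For completeness, the workaround suggested by the HINT would presumably look like the sketch below (untested; the index name idx_test_md5 is just for illustration, and queries would then have to compare md5(term) rather than term directly):

-- sketch: functional index on the MD5 hash of term, per the HINT
create index idx_test_md5 on test USING btree (md5(term)) WITH (fillfactor=100) WHERE catid = 1;
-- lookups would then go through the hash, e.g.:
-- select * from test where catid = 1 and md5(term) = md5('...the long term value...');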