From: "Huang, Suya" <Suya(dot)Huang(at)au(dot)experian(dot)com>
To: Tomas Vondra <tv(at)fuzzy(dot)cz>, "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: weird error "index row size 3040 exceeds btree maximum, 2712" occurs randomly
Date: 2013-10-15 01:44:14
Message-ID: D83E55F5F4D99B4A9B4C4E259E6227CD9DDF75@AUX1EXC01.apac.experian.local
Lists: pgsql-general
Thanks Tomas!
However, in the example I sent, I already did a VACUUM FULL right after deleting the rows causing the problem, before creating the index, and still got the error even though the table had been vacuumed. Note that the table is one I created temporarily using CREATE TABLE AS SELECT *..., so nobody else is accessing that table except me, for testing purposes.
Any ideas? Also, today, when I did the same thing, I could create the index on the table right after deleting the problem rows, without a vacuum.
Is there anything I missed here?
Thanks,
Suya
-----Original Message-----
From: pgsql-general-owner(at)postgresql(dot)org [mailto:pgsql-general-owner(at)postgresql(dot)org] On Behalf Of Tomas Vondra
Sent: Tuesday, October 15, 2013 7:09 AM
To: pgsql-general(at)postgresql(dot)org
Subject: Re: [GENERAL] weird error "index row size 3040 exceeds btree maximum, 2712" occurs randomly
Hi,
On 14.10.2013 05:47, Huang, Suya wrote:
> Hi,
>
> OK, first, I know the reason of this error "index row size 3040
> exceeds btree maximum, 2712" and know that we cannot create index on
> certain columns with size larger than 1/3 buffer page size.
>
> The question is, no matter if I deleted records that caused the
> problem or all records of the table, the error still occurred and
> would disappear after a while randomly, like 1 or 2 minutes or so.
I'd bet what you see is caused by MVCC. The deleted records are not removed immediately, but marked as deleted and eventually freed by the (auto)vacuum background process, once no other sessions need them.
But those records need to be indexed, as other sessions may still need to access them (depending on the transaction isolation level used), so we can't just skip them when creating the index.
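One way to see this is to check the statistics view before creating the index; deleted-but-not-yet-vacuumed rows show up as dead tuples. A minimal sketch (the table name "t" is hypothetical):

```sql
-- Non-zero n_dead_tup means deleted rows are still physically present
-- in the table, and CREATE INDEX will still have to index their values.
SELECT n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 't';
```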
See this for more details on this topic:
http://www.postgresql.org/docs/9.3/static/transaction-iso.html
Try running VACUUM on the table before creating the index, and make sure there are no other connections accessing the table. That should do the trick.
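Concretely, the suggested sequence would look something like this (table and column names here are hypothetical):

```sql
-- Assuming a staging table "t" with a long text column "val":
DELETE FROM t WHERE octet_length(val) > 2712;  -- drop the oversized rows
VACUUM t;                                      -- reclaim the dead tuples
CREATE INDEX t_val_idx ON t (val);             -- should now fit within the limit
```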
That being said, I wonder why you need to create a gin index on such long values. Any particular reason why you decided not to use a MD5 hash of the value, as suggested by the HINT message?
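For illustration, the effect of the HINT's suggestion can be sketched in Python (the sample value is made up): a hash digest has a fixed size regardless of input length, which is why indexing the hash instead of the raw value can never hit the btree row-size limit.

```python
import hashlib

def index_key(value: str) -> str:
    # MD5 digests are always 16 bytes (32 hex characters), no matter
    # how long the input is, so an index built on the hash stays small.
    return hashlib.md5(value.encode("utf-8")).hexdigest()

long_value = "x" * 3040            # longer than the 2712-byte btree maximum
print(len(index_key(long_value)))  # prints 32
```

In PostgreSQL itself the same idea is a functional index, e.g. CREATE INDEX ON t (md5(val)), queried with WHERE md5(val) = md5('...').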
regards
Tomas
--
Sent via pgsql-general mailing list (pgsql-general(at)postgresql(dot)org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general