"Tuple too big" when the tuple is not that big...

From: Paulo Jan <admin(at)digital(dot)ddnet(dot)es>
To: pgsql-general(at)postgresql(dot)org
Subject: "Tuple too big" when the tuple is not that big...
Date: 2001-04-04 17:11:01
Message-ID: 3ACB55A5.D22B2D2B@digital.ddnet.es
Lists: pgsql-general

Hi all:

I have a problem here, using Postgres 6.5.3 on Red Hat Linux 6.0. I
have a table where, each time I run "vacuum analyze", the database
complains with "ERROR: Tuple is too big: size 10460"... and the
problem is that, as far as I know, no record goes beyond the 8K limit.
Some background: the table in question was initially created with a
"text" field, and it gave us endless problems (crashes, coredumps,
etc.). After searching the archives and finding a number of people
warning against using the "text" type (especially in the 6.x series), I
dumped the table contents (with COPY) and recreated it using
"varchar(8088)" instead. When importing the data back, Postgres didn't
say anything, and I assume that if any field had been bigger than
8K it would have complained. BUT... right after importing the data into
the brand new table, I tried "vacuum analyze" again and it did the same
thing.
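For reference, the dump-and-recreate steps I followed looked roughly like this (table and column names below are made up for illustration; the real change was just replacing the old "text" column with varchar(8088)):

```sql
-- Dump the existing table's rows to a flat file (names are illustrative).
COPY mytable TO '/tmp/mytable.copy';

-- Drop the old table and recreate it with varchar(8088) instead of text.
DROP TABLE mytable;
CREATE TABLE mytable (
    id    int4,
    body  varchar(8088)   -- was: body text
);

-- Load the data back in; no "tuple too big" errors at this point.
COPY mytable FROM '/tmp/mytable.copy';

-- ...but this still fails with "ERROR: Tuple is too big: size 10460":
VACUUM ANALYZE mytable;
```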
Some other facts:

-"Vacuum" by itself works fine. It's only "vacuum analyze" that gives problems.
-The table doesn't have any indices.
-Every time I try "\d (table)", Postgres dumps core with
"backend closed the channel unexpectedly".

Any ideas? (Aside from upgrading to 7.x; we can't do that for now.) Do
you need any other information?

Paulo Jan.
DDnet.
