From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Teodor Sigaev <teodor(at)sigaev(dot)ru>
Cc: APseudoUtopia <apseudoutopia(at)gmail(dot)com>, pgsql-general(at)postgresql(dot)org, Oleg Bartunov <oleg(at)sai(dot)msu(dot)su>
Subject: Re: Vacuumdb Fails: Huge Tuple
Date: 2009-10-02 21:23:35
Message-ID: 14606.1254518615@sss.pgh.pa.us
Lists: pgsql-general

Teodor Sigaev <teodor(at)sigaev(dot)ru> writes:
> ginHeapTupleFastCollect and ginEntryInsert both check the tuple size
> against TOAST_INDEX_TARGET, but ginHeapTupleFastCollect does the check
> without counting one ItemPointer, while ginEntryInsert includes it. So
> ginHeapTupleFastCollect could produce a tuple that is 6 bytes larger
> than ginEntryInsert allows. ginEntryInsert is called during pending
> list cleanup.
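
For illustration, here is a minimal standalone sketch of the mismatch
being described; the constants and function names are simplified
stand-ins, not the real code in src/backend/access/gin/:

#include <stdio.h>

#define TOAST_INDEX_TARGET 512  /* approximate collection-time limit */
#define ITEM_POINTER_SIZE    6  /* sizeof(ItemPointerData) */

/* Fast collection checked the entry size by itself ... */
static int collect_size_ok(int entry_size)
{
    return entry_size <= TOAST_INDEX_TARGET;
}

/* ... while insertion checked it with one ItemPointer added. */
static int insert_size_ok(int entry_size)
{
    return entry_size + ITEM_POINTER_SIZE <= TOAST_INDEX_TARGET;
}

int main(void)
{
    /* Entries in the 6-byte window pass collection but would be
     * rejected later, at pending list cleanup. */
    for (int sz = 500; sz <= 520; sz++)
        if (collect_size_ok(sz) && !insert_size_ok(sz))
            printf("size %d collects OK, fails at cleanup\n", sz);
    return 0;
}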
I applied this patch after improving the error reporting a bit --- but
I was unable to get the unpatched code to fail in vacuum as the OP
reported was happening for him. It looks to me like the original coding
limits the tuple size to TOAST_INDEX_TARGET (512 bytes) during
collection, but checks only the much larger GinMaxItemSize limit during
final insertion. So while this is a good cleanup, I am suspicious that
it may not actually explain the trouble report.
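
As a back-of-envelope check on that reading, the worst case the old
collection path could emit is still far below the insertion-time limit
(the GinMaxItemSize figure below is a rough assumed value for an 8K
page, not the real macro):

#include <assert.h>

#define TOAST_INDEX_TARGET  512   /* approx. collection-time limit */
#define ITEM_POINTER_SIZE     6
#define GIN_MAX_ITEM_SIZE  2700   /* rough stand-in for GinMaxItemSize,
                                   * about a third of an 8K page */

int main(void)
{
    /* Worst case the old collection path could produce: 518 bytes. */
    int worst_collected = TOAST_INDEX_TARGET + ITEM_POINTER_SIZE;

    /* Far below the limit insertion actually enforces, so the 6-byte
     * overshoot alone should not make final insertion fail. */
    assert(worst_collected <= GIN_MAX_ITEM_SIZE);
    return 0;
}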
I notice that the complaint was about a VACUUM FULL not a plain VACUUM,
which means that the vacuum would have been moving tuples around and
hence inserting brand new index entries. Is there any possible way that
we could extract a larger index tuple from a moved row than we had
extracted from the original version?
It would be nice to see an actual test case that makes 8.4 fail this way
...
regards, tom lane