From: "Bryan White" <bryan(at)arcamax(dot)com>
To: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: "pgsql-general" <pgsql-general(at)postgreSQL(dot)org>
Subject: Re: Corrupted Table
Date: 2000-07-31 20:12:56
Message-ID: 021601bffb2b$b78b7a00$2dd260d1@arcamax.com
Lists: pgsql-general
> Hmm. Assuming that it is a corrupted-data issue, the only likely
> failure spot that I see in CopyTo() is the heap_getattr macro.
> A plausible theory is that the length word of a variable-length field
> (eg, text column) has gotten corrupted, so that when the code tries to
> access the next field beyond that, it calculates a pointer off the end
> of memory.
>
> You will probably find that plain SELECT will die too if it tries to
> extract data from the corrupted tuple or tuples. With judicious use of
> SELECT last-column ... LIMIT you might be able to narrow down which
> tuples are bad, and then dump out the disk block containing them (use
> the 'tid' pseudo-attribute to see which block a tuple is in). I'm not
> sure if the exercise will lead to anything useful or not, but if you
> want to pursue it...
I am willing to spend some time tracking this down. However, I would prefer
not to keep crashing my live database. I would like to copy the raw data
files to a backup machine. Are there any catches in doing this? This
particular table is only updated at predictable times on the live system, so
I am guessing that as long as it is stable for at least a few minutes before
I copy the file, it will work.
How hard would it be to write a utility that walks a table looking for this
kind of corruption? Are the on-disk data formats documented anywhere?
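As a starting point for such a utility, here is a minimal sketch of a page-level sanity check. It assumes the current PostgreSQL PageHeaderData layout as described in `src/include/storage/bufpage.h` (the 7.x on-disk format at the time of this mail differs, so the field offsets below are an assumption, not a description of that era's format):

```python
import struct

BLCKSZ = 8192  # default PostgreSQL block size
# Assumed PageHeaderData layout (current PostgreSQL; 7.x differs):
# pd_lsn (8), pd_checksum (2), pd_flags (2), pd_lower (2), pd_upper (2),
# pd_special (2), pd_pagesize_version (2), pd_prune_xid (4) = 24 bytes
PAGE_HEADER = struct.Struct("<QHHHHHHI")

def check_page(page: bytes) -> list[str]:
    """Return a list of human-readable problems found in one heap page."""
    errors = []
    if len(page) != BLCKSZ:
        return [f"page is {len(page)} bytes, expected {BLCKSZ}"]
    (_lsn, _cksum, _flags, lower, upper, special,
     _size_ver, _prune_xid) = PAGE_HEADER.unpack_from(page)
    # Free space must sit between the line-pointer array and tuple data:
    # header <= pd_lower <= pd_upper <= pd_special <= BLCKSZ.
    if not (PAGE_HEADER.size <= lower <= upper <= special <= BLCKSZ):
        errors.append(f"inconsistent pointers: pd_lower={lower} "
                      f"pd_upper={upper} pd_special={special}")
        return errors
    # Each 4-byte line pointer packs lp_off:15, lp_flags:2, lp_len:15.
    nitems = (lower - PAGE_HEADER.size) // 4
    for i in range(nitems):
        (lp,) = struct.unpack_from("<I", page, PAGE_HEADER.size + 4 * i)
        lp_off, lp_flags, lp_len = lp & 0x7FFF, (lp >> 15) & 0x3, lp >> 17
        if lp_flags == 1:  # LP_NORMAL: tuple must lie in the data area
            if not (upper <= lp_off and lp_off + lp_len <= BLCKSZ):
                errors.append(f"line pointer {i}: off={lp_off} "
                              f"len={lp_len} outside ({upper}, {BLCKSZ})")
    return errors
```

A real checker would go further, walking each tuple's header and the length words of its variable-length attributes — exactly the field Tom suspects is corrupted — but even this shallow pass catches pages whose pointer structure has been scribbled on.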