From: Michael Guerin <guerin(at)rentec(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-general(at)postgresql(dot)org
Subject: Re: Database corruption.
Date: 2007-02-08 16:48:19
Message-ID: 45CB5453.8080804@rentec.com
Lists: pgsql-general
Tom Lane wrote:
> Michael Guerin <guerin(at)rentec(dot)com> writes:
>
>>> Hmm, that makes it sound like a plain old data-corruption problem, ie,
>>> trashed xmin or xmax in some tuple header. Can you do a "select
>>> count(*)" from this table without getting the error?
>>>
>>>
>> no, select count(*) fails around 25 million rows.
>>
>
> OK, so you should be able to narrow down the corrupted row(s) and zero
> them out, which'll at least let you get back the rest of the table.
> See past archives for the standard divide-and-conquer approach to this.
>
> regards, tom lane
>
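(If by divide-and-conquer you mean halving the range with count(*) until the bad rows are isolated, here's roughly what I've started running -- "bigtable" and the integer primary key "id" are just stand-ins for my table, and the split points are made up for illustration:)

    -- Halve the key range: whichever half errors out contains a corrupt tuple,
    -- whichever half counts cleanly can be set aside.
    select count(*) from bigtable where id between 1 and 12500000;
    select count(*) from bigtable where id between 12500001 and 25000000;

    -- Keep halving the failing range; once it's down to a handful of rows,
    -- pull the physical tuple addresses so they can be dealt with directly:
    select ctid, id from bigtable where id between 12598000 and 12599000;
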
Ok, so I'm trying to track down the rows now (big table, slow queries :( ).
How does one zero out a corrupt row, a plain delete? I see references for
creating the missing pg_clog file, but I don't believe that's what you're
suggesting.
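The closest things I've found in the archives so far look like the following (the ctid is made up for illustration, and I'm not sure either one is what you have in mind, hence the question):

    -- If the damaged tuple can still be reached, delete it by its physical address:
    delete from bigtable where ctid = '(123456,7)';

    -- If the page header itself is trashed, zero_damaged_pages (superuser only)
    -- zeroes the whole page the next time it's read -- every row on that page
    -- is lost, not just the bad one:
    set zero_damaged_pages = on;
    vacuum bigtable;
    set zero_damaged_pages = off;
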
-michael