From: Jeff Davis <pgsql(at)j-davis(dot)com>
To: Erik Jones <ejones(at)engineyard(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Recovering data from table show corruption with "invalid page header in block X"
Date: 2010-02-10 01:27:46
Message-ID: 1265765266.17112.67.camel@monkey-cat.sm.truviso.com
Lists: pgsql-general
On Tue, 2010-02-09 at 17:14 -0800, Erik Jones wrote:
> Anyways, I realized that the dump run with zero_damaged_pages does
> actually finish.
Yeah, it should finish; it's just a question of whether the warnings
continue, and whether you need to keep zero_damaged_pages on to keep
reading.
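For reference, a minimal sketch of what that looks like (the table and
database names here are hypothetical). Keep in mind zero_damaged_pages
is superuser-only, and it zeroes any page with an invalid header as it
is read, so the rows on those pages simply won't appear in the output:

    -- in the session doing the reads (superuser only):
    SET zero_damaged_pages = on;

    -- or pass the setting through libpq for the whole dump, e.g.:
    --   PGOPTIONS='-c zero_damaged_pages=on' pg_dump -t damaged_table mydb > dump.sql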
> Also, I found that I can actually select all of the data by doing
> per-day queries to cause data access to be done via index scans since
> there is a date column indexed; I'm guessing that's because that
> avoids having to read the data pages' headers?
Hmm... I don't think that will actually avoid the issue. My guess is
that those pages happened to be cached from an earlier read with
zero_damaged_pages on. An index scan does not use the ring buffer I was
talking about, so the pages are more likely to stay in cache much
longer. I believe that's what's happening, and the issue is just better
hidden than before.
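One way to test that theory (a sketch, assuming the pg_buffercache
contrib module is installed and "damaged_table" stands in for your real
table name): check whether the table's blocks are still resident in
shared buffers.

    -- list which blocks of damaged_table are sitting in shared buffers
    SELECT b.relblocknumber, b.isdirty
    FROM pg_buffercache b
    WHERE b.relfilenode = (SELECT relfilenode FROM pg_class
                           WHERE oid = 'damaged_table'::regclass)
      AND b.reldatabase = (SELECT oid FROM pg_database
                           WHERE datname = current_database())
    ORDER BY b.relblocknumber;

If the damaged blocks show up there, then a restart (which empties
shared buffers, forcing the page headers to be validated on the next
read from disk) followed by one of your per-day queries should bring
the "invalid page header" errors right back.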
Regards,
Jeff Davis