From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Andrew Sullivan <andrew(at)libertyrms(dot)info>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Detecting corrupted pages earlier
Date: 2003-04-03 04:10:13
Message-ID: 25461.1049343013@sss.pgh.pa.us
Lists: pgsql-hackers
Andrew Sullivan <andrew(at)libertyrms(dot)info> writes:
> You know you have big-trouble, oh-no, ISP ran over
> the tapes while they were busy pitching magnets through your cage,
> data corruption problems, and this is your best hope for recovery?
> Great. Log in, turn on this option, and start working. But across
> every back end? It's the doomsday device for databases.
Yeah, it is. Actually, the big problem with it in my mind is this
scenario: you get a page-header-corrupted error on page X, you
investigate and decide there's no hope for page X, so you turn on
zero_damaged_pages and try to dump the table. It comes to page X,
complains, zeroes it, proceeds, ... and then comes to damaged page Y,
complains, zeroes it, proceeds. Maybe you didn't know page Y had
problems. Maybe you could have gotten something useful out of page Y
if you'd looked first. Too late now.
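For concreteness, the salvage attempt I'm worried about would look
something like this (the table name foo and the output path are just
placeholders):

    SET zero_damaged_pages = on;     -- damaged pages are zeroed (in memory) as they are read, with only a warning
    COPY foo TO '/tmp/foo.salvage';  -- or pg_dump -t foo; pages X and Y both come out as empty pages

Nothing in that sequence ever stops to ask whether page Y was worth a
closer look.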
What I'd really prefer to see is not a ZERO_DAMAGED_PAGES setting,
but an explicit command to "DESTROY PAGE n OF TABLE foo". That would
make you manually admit defeat for each individual page before it'd
drop data. But I don't presently have time to implement such a command
(any volunteers out there?). Also, I could see where try-to-dump, fail,
DESTROY, try again, lather, rinse, repeat, could get pretty tedious on a
badly damaged table.
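The manual loop would go roughly like this (DESTROY PAGE being only the
syntax proposed above, not anything that exists today, and the block
number purely illustrative):

    COPY foo TO '/tmp/foo.dump';    -- fails with an invalid-page-header error at some block, say 123
    DESTROY PAGE 123 OF TABLE foo;  -- explicitly give up on that one page
    COPY foo TO '/tmp/foo.dump';    -- fails again at the next damaged block, and so on
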
regards, tom lane