From: Dimitri Fontaine <dfontaine(at)hi-media(dot)com>
To: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
Cc: Simon Riggs <simon(at)2ndQuadrant(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: PANIC: corrupted item lengths
Date: 2009-06-04 17:56:17
Message-ID: 5D74113E-4FFF-4B18-A141-B16E46717128@hi-media.com
Lists: pgsql-hackers
Hi,
On 4 June 2009 at 15:55, Alvaro Herrera wrote:
> I tend to hate automatic zeroing of pages because there's no way to get
> the contents later for forensics. I would support your proposal if we
> had a way of saving the block elsewhere before zeroing it (say create a
> directory corrupted+zeroed similar to lost+found in the database dir and
> save it annotated with the OID of the table and the block number).
What about creating a special-purpose fork for this? It could contain
some metadata plus the original (maybe corrupted) block content.
> The main problem I see with this is that if you don't immediately act to
> examine the data, some of the pg_clog files that you need to be able to
> read these files may be gone.
Maybe the necessary clog pages could be part of the special fork's
metadata, stored alongside each corrupted block that is saved aside?
Regards,
--
dim