Kevin Grittner wrote:
> Ireneusz Pluta <ipluta(at)wp(dot)pl> wrote:
>
>
>> Is there
>> any ready tool which, given a path to a database
>> cluster, would traverse all cluster directories and files and
>> check all page headers? I probably answered myself - manual VACUUM
>> [VERBOSE] would do - but it fails when it finds the first invalid
>> header, so I would have to re-run it after fixing each one. That's
>> why I am looking for something that only scans and reports invalid
>> headers.
>>
>
> I don't know of anything, but you might try searching pgfoundry.
>
Revisiting my old thread, and to answer myself:
find plus pg_filedump seem to be the way, more or less like:
find "$PGDATA" -type f -name '[0-9]*' | while read -r f; do
    echo "$f"
    pg_filedump "$f" | grep -i invalid
done
# not tested yet
- of course keeping in mind that, while the server is running, newly
allocated pages might appear invalid to pg_filedump, as Tom explained
just recently:
> A newly-added page on disk will
> be initially filled with zeroes, which I think pg_filedump will complain
> about. It won't get overwritten with "valid" data until the page is
> next written, either because of a checkpoint or because the buffer space
> is needed for another page. pg_filedump can't see the state of the page
> within the server's buffers, which is what counts here.
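For a standalone cross-check that tolerates those freshly extended pages, the header sanity test can be sketched in Python. This is a loose imitation of what PostgreSQL's PageHeaderIsValid() checks, not the real thing: it assumes the default 8 kB block size, a little-endian host, and the usual PageHeaderData layout, and it reports an all-zero page as "new" rather than broken, per Tom's explanation above.

```python
import struct

BLCKSZ = 8192  # assumes the default PostgreSQL block size


def page_status(page: bytes) -> str:
    """Classify one on-disk page: 'new', 'ok', or 'invalid'.

    'new' means the page is still all zeroes (freshly extended, not
    yet written back by the server), which is expected on a running
    cluster and should not be reported as corruption.
    """
    if len(page) != BLCKSZ:
        return 'invalid'
    if page == b'\x00' * BLCKSZ:
        return 'new'
    # PageHeaderData layout (assumed): pd_lsn(8), pd_checksum/pd_tli(2),
    # pd_flags(2), then pd_lower, pd_upper, pd_special,
    # pd_pagesize_version as four 16-bit fields starting at offset 12.
    pd_lower, pd_upper, pd_special, pd_pagesize_version = \
        struct.unpack_from('<4H', page, 12)
    header_size = 24  # sizeof(PageHeaderData) without line pointers
    # The free-space pointers must be ordered and inside the block.
    if not (header_size <= pd_lower <= pd_upper <= pd_special <= BLCKSZ):
        return 'invalid'
    # The high byte of pd_pagesize_version encodes the block size.
    if pd_pagesize_version & 0xFF00 != BLCKSZ:
        return 'invalid'
    return 'ok'
```

To scan a relation file, read it in BLCKSZ-sized chunks and print the offsets of pages classified as 'invalid'; 'new' pages can be ignored while the server is up.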