Aw: Re: pg_dump include/exclude data, was: verify checksums / CREATE DATABASE

From: "Karsten Hilbert" <Karsten(dot)Hilbert(at)gmx(dot)net>
To: "Adrian Klaver" <adrian(dot)klaver(at)aklaver(dot)com>
Cc: pgsql-general <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Aw: Re: pg_dump include/exclude data, was: verify checksums / CREATE DATABASE
Date: 2019-06-11 18:15:34
Message-ID: trinity-09990bfd-ae66-4c01-b9bb-cafb31a57e9f-1560276934383@3c-app-gmx-bs19
Lists: pgsql-general

> > The problem I hope to protect against with this approach: the
> > CREATE DATABASE might untaint corrupted data from a bad disk
> > block into a good disk block virtue of doing a file level
> > copy.
> >
> > I hope my reasoning isn't going astray.
>
> As I understand it checksums are done on the page level using a hash (for
> details: https://doxygen.postgresql.org/checksum__impl_8h_source.html)
> I am not sure how a page could get un-corrupted by virtue of a file copy.

Ah, no, I did not explain myself well.

Let's assume a corrupted, bad (but readable at the hardware
level) disk block B. A filesystem-level copy (as in CREATE
DATABASE) would successfully read that disk block B and
copy its corrupted content into a good disk block G elsewhere
on the disk. Verifying the checksum of the page sitting on
block B before cloning the database would reveal the
corruption before it got carried along into the clone.
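To illustrate the point, here is a toy sketch in Python. It uses a CRC32 stored at the front of a fixed-size "page" as a stand-in checksum (PostgreSQL actually stores a 16-bit FNV-1a-derived checksum in the page header; see the checksum_impl.h link above). The page layout and helper names here are invented for the illustration, not PostgreSQL's:

```python
import binascii

PAGE_SIZE = 8192  # PostgreSQL's default block size

def make_page(payload: bytes) -> bytes:
    """Build a toy 'page': payload zero-padded to PAGE_SIZE, with a
    CRC32 of the body stored in the first 4 bytes (a simplified
    stand-in for the real page-header checksum)."""
    body = payload.ljust(PAGE_SIZE - 4, b"\x00")
    crc = binascii.crc32(body).to_bytes(4, "little")
    return crc + body

def verify_page(page: bytes) -> bool:
    """Recompute the body checksum and compare against the stored one."""
    stored = int.from_bytes(page[:4], "little")
    return binascii.crc32(page[4:]) == stored

page = make_page(b"row data")
assert verify_page(page)

# Simulate silent on-disk corruption: flip one bit in the body.
corrupted = bytearray(page)
corrupted[100] ^= 0x01
corrupted = bytes(corrupted)
assert not verify_page(corrupted)

# A byte-for-byte copy (what CREATE DATABASE effectively does at the
# file level) reads the block fine and carries the corruption along
# unchanged -- the copy verifies no better than the original.
copied = corrupted[:]
assert not verify_page(copied)

# Checking checksums BEFORE cloning is what catches it.
print("corruption detected before and after copy")
```

The point of the sketch: the file copy succeeds at the hardware level, so only an explicit checksum verification pass before the clone surfaces the bad page.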

Does that make sense?

Karsten
