From: Torsten Förtsch <torsten(dot)foertsch(at)gmx(dot)net>
To: Andrew Sullivan <ajs(at)crankycanuck(dot)ca>, pgsql-general(at)postgresql(dot)org
Subject: Re: check database integrity
Date: 2014-07-21 04:29:16
Message-ID: 53CC971C.8070709@gmx.net
Lists: pgsql-general
On 20/07/14 16:02, Andrew Sullivan wrote:
>> Then I could also use it in production. But currently I
>> need it only to verify a backup.
> If you need to verify a backup, why isn't pg_dump acceptable? Or is
> it that you are somehow trying to prove that what you have on the
> target (backup) machine is in fact production-ready? I guess I don't
> really understand what you are trying to do.
Sorry for kind of misusing the word "backup". What I am doing is this: I
took a base backup and replayed a few xlogs. That is what I meant by
"backup".
What I want to verify is whether all pages in all files match their
checksums, so I have to make Postgres read every page at least once.
pg_dump does this for ordinary tables and TOAST, but as far as I know it
does not read index relations. A

select count(*)

from every table would also do the job, again without the indexes.
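One thing I might try (untested sketch, assuming the pg_prewarm extension
that ships with 9.4 is available) is to push every block of every relation,
indexes included, through shared buffers so the checksums get verified on
read:

  -- run once per database; only touches the main fork of each relation
  CREATE EXTENSION IF NOT EXISTS pg_prewarm;

  SELECT c.oid::regclass AS relation,
         pg_prewarm(c.oid::regclass) AS blocks_read  -- default 'buffer' mode
  FROM pg_class c
  WHERE c.relkind IN ('r', 'i', 't', 'm')            -- tables, indexes, toast, matviews
    AND c.relpersistence <> 't'                      -- skip other sessions' temp tables
  ORDER BY c.relname;

That would have to be repeated for every database in the cluster, and it
still skips the fsm/vm forks, but it should at least cover the index files
that pg_dump never reads.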
The sentence about the backup was only to point out that I don't really
care whether the query locks the database against concurrent transactions,
though it would be better if it did not acquire an exclusive lock on all
tables.
Torsten