From: Heiko Wundram <modelnine(at)modelnine(dot)org>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Somewhat automated method of cleaning table of corrupt records for pg_dump
Date: 2012-10-22 09:54:47
Message-ID: 508517E7.2010306@modelnine.org
Lists: pgsql-general

On 2012-10-22 09:05, Craig Ringer wrote:
> Working strictly with a *copy*, does REINDEXing then CLUSTERing the
> tables help? VACUUM FULL on 8.3 won't rebuild indexes, so if index
> damage is the culprit a reindex may help. Then, if CLUSTER is able to
> rewrite the tables in index order you might be able to recover.

REINDEXing all indexes works, and CLUSTERing all tables on the primary
key (index) after rebuilding them also works, but the dump still fails
with the same error as before (i.e., out of memory while dumping one
specific table). My guess is that the broken record(s) in that table
either aren't moved by CLUSTER, or that their corrupt data portion is
carried over unchanged. So apart from walking all table rows and
deleting those (by OID) that raise an error, there doesn't seem to be
much I can do.

If there's any other possibility of "out of the box" recovery - short
of writing myself a small script to walk all rows - I'd still be
grateful for a hint.
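
In case it's useful to anyone, here's roughly the kind of script I
have in mind - only a sketch, not something I've run against the
damaged cluster yet. It uses Python with psycopg2 and locates rows by
ctid rather than OID; the connection string and the table name
("mytable") are placeholders:

import psycopg2

# Sketch only: the DSN, the table name, and the use of ctid as the
# row locator are assumptions for illustration.
conn = psycopg2.connect("dbname=mydb")
conn.autocommit = True  # each statement runs in its own transaction
cur = conn.cursor()

# Fetching only ctid shouldn't touch the (possibly corrupt) data
# columns, so this pass is expected to succeed.
cur.execute("SELECT ctid FROM mytable")
locators = [row[0] for row in cur.fetchall()]

bad = []
for ctid in locators:
    try:
        # Reading all columns forces a full detoast; a corrupt
        # record should raise the error here (e.g. the out-of-memory
        # one I'm seeing from pg_dump).
        cur.execute("SELECT * FROM mytable WHERE ctid = %s", (ctid,))
        cur.fetchall()
    except psycopg2.Error:
        # If the backend crashes instead of reporting an error, the
        # connection is gone and we'd have to reconnect here.
        bad.append(ctid)

for ctid in bad:
    cur.execute("DELETE FROM mytable WHERE ctid = %s", (ctid,))
print("deleted %d broken row(s)" % len(bad))

One caveat: deleting by ctid only works as long as nothing rewrites
the table in between, since ctids aren't stable across e.g. CLUSTER
or VACUUM FULL.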

Thanks for the pointer to CLUSTER and REINDEX anyway - I'd never
properly used either before, and this gave me an incentive to dig
into the documentation.

--
--- Heiko.
