pg_dump 8.3.3 ERROR: invalid page header in block 2264419 of relation "pg_largeobject"

From: David Wall <d(dot)wall(at)computer(dot)org>
To: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: pg_dump 8.3.3 ERROR: invalid page header in block 2264419 of relation "pg_largeobject"
Date: 2017-05-24 23:02:14
Message-ID: 5f405dbd-d8d0-e284-166d-1b0c0946d5a6@computer.org
Lists: pgsql-general

We have not noted any issues, but when I ran a pg_dump on an 8.3.3
database, it failed after an hour or so with the error:

ERROR: invalid page header in block 2264419 of relation "pg_largeobject"
pg_dump: The command was: FETCH 1000 IN bloboid

Since we seem to have a data corruption issue, the question is: how can I
either fix this, or have pg_dump skip the bad data and produce the best
dump it can? That is, I'd like to create a new, clean database containing
whatever data I can recover.
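For the "skip and continue" route, one commonly cited approach (a sketch
only, not something verified on this cluster) is to run the dump with the
zero_damaged_pages server parameter enabled, which makes the backend zero
out pages with invalid headers instead of raising an error. The data on
those pages is discarded permanently, so a filesystem-level copy of the
data directory should be taken first. It requires superuser, and the
database name and output path below are placeholders:

```shell
# DANGER: zero_damaged_pages silently discards damaged pages.
# Take a filesystem-level backup of the data directory before trying this.
# "mydb" and the output filename are placeholders, not values from the report.
PGOPTIONS="-c zero_damaged_pages=on" pg_dump -U postgres mydb > mydb.sql
```

PGOPTIONS passes the setting to the session pg_dump opens, so no
postgresql.conf change is needed; remember to unset it afterwards so
future corruption is not silently masked.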

Because the large objects mostly store uploaded files (which are
encrypted, so the DB contents are likely meaningless anyway), losing a
few is not too bad; well, no worse than whatever we have now.
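To salvage whatever large objects are still readable without zeroing
anything, a per-object export loop is another option. This is only a
sketch: 8.3 has no pg_largeobject_metadata catalog, so the OID list
comes from pg_largeobject itself, and "mydb" and the "blobs" directory
are placeholder names. Objects with damaged pages fail individually and
get logged, instead of aborting the whole run the way pg_dump does:

```shell
# Sketch: export each large object individually, logging failures.
# Assumes superuser access to a database named "mydb"; paths are placeholders.
mkdir -p blobs
psql -At -d mydb -c "SELECT DISTINCT loid FROM pg_largeobject" > loids.txt
while read oid; do
  psql -d mydb -c "\lo_export $oid 'blobs/$oid'" \
    || echo "$oid" >> failed_loids.txt
done < loids.txt
```

The OIDs listed in failed_loids.txt can then be matched against the
application tables that reference them, to see which uploaded files were
actually lost.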

Thanks,
David

The OS it is running on shows:

cat /proc/version
Linux version 2.6.18-92.1.10.el5.xs5.0.0.39xen (root(at)pondo-2) (gcc
version 4.1.2 20071124 (Red Hat 4.1.2-42)) #1 SMP Thu Aug 7 14:58:14 EDT
2008

uname -a
Linux example.com 2.6.18-92.1.10.el5.xs5.0.0.39xen #1 SMP Thu Aug 7
14:58:14 EDT 2008 i686 i686 i386 GNU/Linux
