From: Francisco Reyes <lists(at)stringsutils(dot)com>
To: PostgreSQL general <pgsql-general(at)postgresql(dot)org>
Subject: Corrupted DB? could not open file pg_clog/####
Date: 2006-07-30 05:31:14
Message-ID: cone.1154237474.238596.23104.1000@zoraida.natserv.net
Lists: pgsql-general
Looking at the archives, it seems that a missing pg_clog file indicates some form
of row or page corruption.
In an old thread from back in 2003, Tom Lane recommended
(http://tinyurl.com/jushf):
>>If you want to try to narrow down where the corruption is, you can
>>experiment with commands like
>>select ctid,* from big_table offset N limit 1;
Is that still a valid suggestion?
How do I know the maximum possible value of offset to try for each table?
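To make the question concrete, this is the sort of probing loop I had in mind (a
sketch only, driving psql from Python; "mydb" and "big_table" are placeholders,
and I am assuming the largest offset worth trying is one less than the table's
row count):

    # A sketch of probing big_table with the OFFSET/LIMIT trick quoted above.
    # "mydb" and "big_table" are placeholders for the real database and table.
    import subprocess

    def probe(offset):
        """Read the single row at the given offset; True if it reads cleanly."""
        sql = "SELECT ctid, * FROM big_table OFFSET %d LIMIT 1;" % offset
        result = subprocess.run(["psql", "-d", "mydb", "-c", sql],
                                capture_output=True, text=True)
        return result.returncode == 0   # psql exits non-zero when the command fails

    rowcount = 1000000   # placeholder; e.g. from: SELECT count(*) FROM big_table;

    # Binary search for the first offset that no longer reads back.
    lo, hi = 0, rowcount - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if probe(mid):
            lo = mid + 1
        else:
            hi = mid
    print("first unreadable row is around offset", lo)
    # (if probe(lo) still succeeds here, the whole table read back fine)

The binary search leans on the assumption that OFFSET N has to scan past every
earlier row, so any probe at or beyond the first damaged row should fail.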
If I have logging turned on, at which level will the error show? I am only
aware of the problem because an application connected to PostgreSQL had the
errors in its logs, but I am not seeing anything in the PostgreSQL logs
themselves.
I just tried a pg_dump and got the error:
could not open file "pg_clog/0000"
The file pg_clog/0000 is missing.
Looking at another thread (http://tinyurl.com/feyye), I see that the file can
be created as 256K worth of zeroes. If I do this, will operations resume
normally? Is there a way to tell if any data was lost?
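To make that concrete, this is roughly what I would run to create the file (a
sketch only; I am assuming it is run from inside the cluster's data directory):

    # A sketch of creating the missing segment as 256K of zeroes, as that thread
    # suggests (equivalent to: dd if=/dev/zero of=pg_clog/0000 bs=256k count=1).
    # The path is relative to the cluster's data directory; adjust as needed.
    import os

    clog_path = os.path.join("pg_clog", "0000")
    if not os.path.exists(clog_path):
        with open(clog_path, "wb") as f:
            f.write(b"\x00" * (256 * 1024))   # 256K of zeroes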