From: Francisco Reyes <lists(at)stringsutils(dot)com>
To: Martijn van Oosterhout <kleptog(at)svana(dot)org>
Cc: PostgreSQL general <pgsql-general(at)postgresql(dot)org>
Subject: Re: Corrupted DB? could not open file pg_clog/####
Date: 2006-07-31 22:09:33
Message-ID: cone.1154383773.627041.69955.5001@35st-server.simplicato.com
Lists: pgsql-general
Martijn van Oosterhout writes:
> That's when you've reached the end of the table. The point is that
> before then you'll have found the value of N that produces the error.
It will be a while.. my little Python script is doing under 10 selects/sec,
and there are nearly 67 million records. :-(
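Since the error shows up at every offset at or past the first bad row (under a sequential scan, `SELECT ... LIMIT 1 OFFSET N` only reads the first N+1 rows), the linear scan could be replaced by a binary search over the offset. A minimal sketch, assuming that monotone behavior; the `probe` callable is hypothetical and would, in practice, run the query via psycopg2 and catch the error:

```python
def first_failing_offset(probe, n_rows):
    """Binary-search for the smallest offset where probe(offset) fails.

    probe(offset) returns True if `SELECT ... LIMIT 1 OFFSET <offset>`
    succeeds, False if it raises the pg_clog error.  Assumes failures
    are monotone: once an offset fails, every larger offset fails too.
    """
    lo, hi = 0, n_rows           # hi is one past the last valid offset
    while lo < hi:
        mid = (lo + hi) // 2
        if probe(mid):
            lo = mid + 1         # rows up to mid are readable; look higher
        else:
            hi = mid             # mid already fails; bad row is at or before mid
    return lo                    # first failing offset, or n_rows if none fail

# Example with a fake probe standing in for the real query:
# pretend rows at offset >= 42_000_000 are unreadable.
bad = 42_000_000
print(first_failing_offset(lambda n: n < bad, 67_000_000))  # 42000000
```

At ~26 probes for 67 million rows instead of millions of sequential selects, this finishes in seconds rather than days, at the cost of each probe forcing a scan up to its offset.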
>> How?
>> Tried set client_min_message='DEBUG';
>
> That should do it.
The right one (for the archives) was actually:
set client_min_messages=DEBUG;
> It will rollback all pending transactions. The point is that it's
> looking for information about transactions that were committed. This is
> usually a memory or disk error.
So, would it be safe to create the file and fill it with 256 KB of zeros?
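For the archives, the zero-fill workaround usually suggested on this list looks like the sketch below. The `XXXX` segment name is a placeholder for whatever `pg_clog/####` file the error names, and `PGDATA` here is a scratch directory for illustration; on a real cluster you would stop the server first and point it at the actual data directory. Use with care: if the missing segment covered transactions that were not actually resolved, this can silently corrupt data.

```shell
# Stand-in for the real data directory, so the sketch is safe to run.
PGDATA=$(mktemp -d)
mkdir -p "$PGDATA/pg_clog"

# Create one zero-filled 256 KB clog segment.  Substitute XXXX with
# the exact segment name from the "could not open file pg_clog/####"
# error message.
dd if=/dev/zero of="$PGDATA/pg_clog/XXXX" bs=256k count=1

ls -l "$PGDATA/pg_clog/XXXX"   # should show a 262144-byte file
```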
> Sounds like some corrupt data. Once you've located the invalid data,
> dump the block with pg_filedump, that should give you more info about
> what happened.
At the rate my script is going, it's going to take a very long time to
find where the problem is. If I have a dump that failed partway through, is
there any useful info I can take from the point where it stopped?