From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: mlw <pgsql(at)mohawksoft(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Recovery tools
Date: 2003-04-14 14:26:06
Message-ID: 7609.1050330366@sss.pgh.pa.us
Lists: pgsql-hackers
mlw <pgsql(at)mohawksoft(dot)com> writes:
> Just suppose that all the log files are gone, and the only thing left is
> some of the files in the /data directory. Is there any way to scan this
> data and dump it to a file which could subsequently be used with a "COPY
> FROM STDIN" on a new database?
There aren't separate tools, and I'm not sure there could or should be.
What I'd do in that situation is:
* pg_resetxlog to get a minimally valid xlog
* if clog is missing, gin up files containing 0x55 everywhere
  (to make it look like every transaction has committed --- or
  put 00 everywhere if you'd rather assume that recent
  transactions all aborted; see the sketch after this list)
* start postmaster, look around, fix problems until I can pg_dump.
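
A minimal sketch of ginning up such a clog file, assuming 8 kB pages and
32 pages per SLRU segment (so 256 kB per file), and that the missing
segment is the first one, named "0000":

# Fabricate a pg_clog segment in which every transaction looks committed.
# Each byte holds four 2-bit transaction statuses; 0x55 = 01010101b,
# i.e. status 01 (committed) in all four slots.  Writing 0x00 instead
# leaves every status zeroed, which recovery treats like an abort.

SEGMENT_BYTES = 8192 * 32   # BLCKSZ * pages-per-segment (assumed values)

def write_fake_clog_segment(path, status_byte=0x55):
    with open(path, "wb") as f:
        f.write(bytes([status_byte]) * SEGMENT_BYTES)

# e.g. write_fake_clog_segment("/your/data/pg_clog/0000")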
AFAICS, you can make tools that work at the page/item level (like
pg_filedump, see http://sources.redhat.com/rhdb/) but there is hardly
any scope for doing anything intermediate between that and a full
postmaster. There's no hope of decoding the contents of a tuple without
access to the table's tuple descriptor, which means you need most of the
system catalog mechanisms; plus you'd need the I/O routines for the
datatypes involved. Might as well just use the postmaster as your data
inspection tool.
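
To make "page/item level" concrete, here is a minimal sketch that walks a
heap page's line pointers, assuming the current 24-byte page header and
little-endian line-pointer packing (the header layout differed somewhat
in older releases):

import struct

BLCKSZ = 8192

def list_items(page):
    # pd_lower sits at byte 12 of the page header; the 4-byte line
    # pointers start right after the 24-byte header.
    (pd_lower,) = struct.unpack_from("<H", page, 12)
    for i in range((pd_lower - 24) // 4):
        (lp,) = struct.unpack_from("<I", page, 24 + 4 * i)
        lp_off = lp & 0x7FFF          # bits 0-14: byte offset of the tuple
        lp_flags = (lp >> 15) & 0x3   # bits 15-16: unused/normal/redirect/dead
        lp_len = (lp >> 17) & 0x7FFF  # bits 17-31: tuple length in bytes
        yield lp_off, lp_len, lp_flags

# e.g., on the first page of some relation file (the name is hypothetical):
# with open("/your/data/base/1/16384", "rb") as f:
#     for off, length, flags in list_items(f.read(BLCKSZ)):
#         print(off, length, flags)

That locates the tuples, but decoding the bytes inside them still needs
the tuple descriptor and the datatypes' I/O routines, per the above.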
regards, tom lane