From: Christopher Kings-Lynne <chriskl(at)familyhealth(dot)com(dot)au>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Kouber Saparev <postgresql(at)saparev(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Help me recovering data
Date: 2005-02-14 17:47:10
Message-ID: 4210E41E.5060106@familyhealth.com.au
Lists: pgsql-hackers
> I think you're pretty well screwed as far as getting it *all* back goes,
> but you could use pg_resetxlog to back up the NextXID counter enough to
> make your tables and databases reappear (and thereby lose the effects of
> however many recent transactions you back up over).
>
> Once you've found a NextXID setting you like, I'd suggest an immediate
> pg_dumpall/initdb/reload to make sure you have a consistent set of data.
> Don't VACUUM, or indeed modify the DB at all, until you have gotten a
> satisfactory dump.
>
> Then put in a cron job to do periodic vacuuming ;-)
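For anyone following along, the sequence Tom describes might look roughly like this. This is only a sketch: the XID passed to `pg_resetxlog -x` is a placeholder (you must find a value that makes your tables reappear), and the data directory path and `template1` target are assumptions about a typical 8.0-era installation.

```shell
# Stop the postmaster, then back up the NextXID counter.
# 500000 is a PLACEHOLDER -- choose a value for your cluster.
pg_ctl -D /usr/local/pgsql/data stop
pg_resetxlog -x 500000 /usr/local/pgsql/data
pg_ctl -D /usr/local/pgsql/data start

# Take a consistent dump immediately; do NOT vacuum or modify
# the database until this dump looks satisfactory.
pg_dumpall > recovered.sql

# Re-initialize a fresh cluster and reload the dump.
pg_ctl -D /usr/local/pgsql/data stop
mv /usr/local/pgsql/data /usr/local/pgsql/data.old
initdb -D /usr/local/pgsql/data
pg_ctl -D /usr/local/pgsql/data start
psql -f recovered.sql template1

# And the cron job for periodic vacuuming, e.g. nightly at 03:00:
# 0 3 * * *  vacuumdb --all --quiet
```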
This might seem like a stupid question, but since this is a massive
data-loss potential in PostgreSQL, what's so hard about having the
checkpointer (or some other background process) check the transaction
counter when it runs, and either issue a database-wide VACUUM if the
counter is about to wrap, or simply refuse new transactions?

I think people would rather have their database stop accepting new
transactions than silently lose data...
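Until something like that exists in the server, a cron job can at least warn before wraparound gets close. A minimal sketch — `age()` and `pg_database.datfrozenxid` are standard catalog facilities, but the one-billion threshold here is an arbitrary example, not an official limit:

```shell
# Report how far each database's oldest XID is from wraparound and
# complain when it crosses an (arbitrary) threshold of 1 billion.
psql -At -F '|' -c \
  "SELECT datname, age(datfrozenxid) FROM pg_database" template1 |
while IFS='|' read -r db xid_age; do
    if [ "$xid_age" -gt 1000000000 ]; then
        echo "WARNING: $db is $xid_age XIDs old -- vacuum soon" >&2
    fi
done
```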
Chris