From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Christopher Kings-Lynne <chriskl(at)familyhealth(dot)com(dot)au>
Cc: Kouber Saparev <postgresql(at)saparev(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Help me recovering data
Date: 2005-02-14 19:40:34
Message-ID: 535.1108410034@sss.pgh.pa.us
Lists: pgsql-hackers

Christopher Kings-Lynne <chriskl(at)familyhealth(dot)com(dot)au> writes:
> This might seem like a stupid question, but since this is a massive
> data loss potential in PostgreSQL, what's so hard about having the
> checkpointer or something check the transaction counter when it runs
> and either issue a db-wide vacuum if it's about to wrap, or simply
> disallow any new transactions?

The checkpointer is entirely incapable of either detecting the problem
(it doesn't have enough infrastructure to examine pg_database in a
reasonable way) or preventing backends from doing anything if it did
know there was a problem.

> I think people'd rather their db just stopped accepting new
> transactions rather than just losing data...

Not being able to issue new transactions *is* data loss --- how are you
going to get the system out of that state?

autovacuum is the correct long-term solution to this, not some kind of
automatic hara-kiri.

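(For readers following the thread: the wraparound distance being argued about here can be inspected by hand. On releases that expose datfrozenxid in pg_database and provide the age() function, a query along these lines shows how many transactions each database is from its frozen horizon --- a sketch, not something proposed in the mail itself:)

```sql
-- How far is each database from XID wraparound?
-- age() returns the number of transactions since datfrozenxid;
-- trouble begins as this approaches ~2 billion (2^31).
SELECT datname,
       age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY age(datfrozenxid) DESC;
```

A database-wide VACUUM (what autovacuum automates) advances datfrozenxid and resets that age.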
			regards, tom lane