From: Christopher Quinn <cq(at)htec(dot)demon(dot)co(dot)uk>
To: pgsql-hackers(at)postgresql(dot)org
Subject: fault tolerance...
Date: 2002-03-19 09:49:24
Message-ID: 3C9709A4.2070005@htec.demon.co.uk
Lists: pgsql-hackers
Hello,
I've been wondering how pgsql guarantees data integrity in the
face of soft failures. In particular, does it use an alternative
to the double root block technique? By that I mean writing, as the
final indication that new log records are valid, some
meta-information - including the location of the last log record
written - to alternating disk blocks at fixed locations.
This is the only technique I know of - does pgsql use
something analogous?
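For reference, here is a minimal sketch of the double root block idea as I understand it (the field layout, block size, and CRC trailer are my own illustration, not anything pgsql is known to do): writes alternate between two fixed blocks, and recovery picks the intact copy with the highest sequence number, so a torn write to one root leaves the other valid.

```python
import struct
import zlib

BLOCK_SIZE = 512
ROOT_OFFSETS = (0, BLOCK_SIZE)  # two fixed root-block locations on the device

def pack_root(seq, last_log_ptr):
    # seq: monotonically increasing write counter; last_log_ptr: offset of
    # the last valid log record.  A CRC over the payload detects torn writes.
    payload = struct.pack("<QQ", seq, last_log_ptr)
    return payload + struct.pack("<I", zlib.crc32(payload))

def unpack_root(block):
    # Returns (seq, last_log_ptr), or None if the block is torn/stale.
    payload, (crc,) = block[:16], struct.unpack("<I", block[16:20])
    if zlib.crc32(payload) != crc:
        return None
    return struct.unpack("<QQ", payload)

def write_root(dev, seq, last_log_ptr):
    # Alternate between the two fixed blocks, so the previous root
    # survives intact if this write is interrupted partway through.
    dev.seek(ROOT_OFFSETS[seq % 2])
    dev.write(pack_root(seq, last_log_ptr).ljust(BLOCK_SIZE, b"\0"))
    dev.flush()

def read_current_root(dev):
    # Recovery: read both candidates, keep the newest one whose CRC checks.
    best = None
    for off in ROOT_OFFSETS:
        dev.seek(off)
        root = unpack_root(dev.read(BLOCK_SIZE))
        if root and (best is None or root[0] > best[0]):
            best = root
    return best  # (seq, last_log_ptr) of the newest intact root, or None
```

The point of the alternation is that the two roots are never overwritten in the same operation, so at least one of them always carries a self-consistent pointer into the log.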
Also, I note the comment in the developer docs about caching disk
drives: can anyone supply a reference on this subject (I have been
on the lookout for a long time without success), and perhaps more
generally on what exactly can go wrong with a disk write
interrupted by a power failure?
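As I understand the caching hazard, even this usual precaution (my own sketch, not pgsql code) is not airtight: fsync asks the OS to push data to the device, but a drive with write-back caching enabled may acknowledge completion while the data is still only in its volatile cache.

```python
import os

def durable_write(path, data):
    # write() alone may leave data in OS buffers indefinitely;
    # fsync() forces it to the device.  A write-back caching drive
    # can still acknowledge the flush before the data hits the
    # platter - which is exactly the hazard in question.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        os.write(fd, data)
        os.fsync(fd)
    finally:
        os.close(fd)
```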
Lastly, is there any form of integrity checking of disk-block-level
data? I have vague recollections of seeing mention of CRC/XOR in
relation to Oracle or DB2. Whether or not pgsql uses such a scheme,
I am curious to know the rationale for its use - it makes me wonder
what, if anything, can be relied on 100%!
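By block-level integrity checking I mean something like the following sketch (the trailer layout is my own invention): each block carries a checksum of its contents, verified on read, so a torn or corrupted write is detected rather than silently trusted.

```python
import struct
import zlib

CKSUM_LEN = 4  # CRC32 trailer appended to each block

def seal_block(data):
    # Append a CRC32 of the block contents as a trailer before writing.
    return data + struct.pack("<I", zlib.crc32(data))

def verify_block(block):
    # Return the payload, or raise if the stored CRC does not match -
    # i.e. the block was torn or corrupted on disk.
    data, (crc,) = block[:-CKSUM_LEN], struct.unpack("<I", block[-CKSUM_LEN:])
    if zlib.crc32(data) != crc:
        raise IOError("block checksum mismatch - torn or corrupted write")
    return data
```

Of course the check only detects corruption; recovering the block still depends on something else (a log, a mirror), which is presumably where the rationale question comes in.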
Thanks,
Chris Quinn