From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Greg Stark <gsstark(at)mit(dot)edu>
Cc: "Mark Cave-Ayland" <m(dot)cave-ayland(at)webbased(dot)co(dot)uk>, "'Manfred Koizar'" <mkoi-pg(at)aon(dot)at>, "'Bruce Momjian'" <pgman(at)candle(dot)pha(dot)pa(dot)us>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Cost of XLogInsert CRC calculations
Date: 2005-05-31 15:19:02
Message-ID: 12480.1117552742@sss.pgh.pa.us
Lists: pgsql-hackers
Greg Stark <gsstark(at)mit(dot)edu> writes:
> Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> writes:
>> It's not really a matter of backstopping the hardware's error detection;
>> if we were trying to do that, we'd keep a CRC for every data page, which
>> we don't. The real reason for the WAL CRCs is as a reliable method of
identifying the end of WAL: when the "next record" doesn't checksum, you
know it's bogus.
> Is the random WAL data really the concern? It seems like a more reliable way
> of dealing with that would be to just accompany every WAL entry with a
> sequential id and stop when the next id isn't the correct one.
We do that, too (the xl_prev links and page header addresses serve that
purpose). But it's not sufficient given that WAL records can span pages
and therefore may be incompletely written.
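To make the failure mode concrete, here is a simplified sketch of end-of-WAL
detection (the record layout, field names, and CRC routine below are invented
for illustration; this is not the actual xlog.c code). A record that crosses a
page boundary can have a perfectly good header and xl_prev back-link on the
first page while its tail was never flushed, so only a checksum over the whole
payload exposes the truncation:

#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

typedef struct
{
    uint64_t    xl_prev;        /* location of the previous record */
    uint32_t    xl_len;         /* payload length; may span pages */
    uint32_t    xl_crc;         /* CRC of the payload, set at insert time */
} WalRecordHeader;

/* Plain bitwise CRC-32, standing in for the real WAL CRC routine. */
static uint32_t
wal_crc32(const void *data, size_t len)
{
    const unsigned char *p = data;
    uint32_t    crc = 0xFFFFFFFF;

    while (len--)
    {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)
            crc = (crc >> 1) ^ (0xEDB88320U & (0U - (crc & 1U)));
    }
    return crc ^ 0xFFFFFFFF;
}

/*
 * True if the record looks like valid WAL.  The back-link check catches
 * stale data left over in a recycled segment, but a record whose header
 * made it to disk while its tail did not can only be caught by the CRC.
 */
static bool
record_is_valid(const WalRecordHeader *hdr, const char *payload,
                uint64_t expected_prev)
{
    if (hdr->xl_prev != expected_prev)
        return false;           /* no record here: end of WAL */
    if (wal_crc32(payload, hdr->xl_len) != hdr->xl_crc)
        return false;           /* torn or corrupt record: end of WAL */
    return true;
}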
> The only truly reliable way to handle this would require two fsyncs per
> transaction commit which would be really unfortunate.
How are two fsyncs going to be better than one?
regards, tom lane