From: Michael Paquier <michael(at)paquier(dot)xyz>
To: Michael Banck <michael(dot)banck(at)credativ(dot)de>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Asif Rehman <asifr(dot)rehman(at)gmail(dot)com>, pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: Online verification of checksums
Date: 2020-11-04 08:48:41
Message-ID: 20201104084841.GF1711@paquier.xyz
Lists: pgsql-hackers
On Fri, Oct 30, 2020 at 11:30:28AM +0900, Michael Paquier wrote:
> Playing with dd and generating random pages, this detects random
> corruptions, making use of a wait/retry loop if a failure is detected.
> As mentioned upthread, this is a double-edged sword, increasing the
> number of retries reduces the chances of false positives, at the cost
> of making regression tests longer. This stuff uses up to 5 retries
> with 100ms of sleep for each page. (I am aware of the fact that the
> commit message of the main patch is not written yet).
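
For the archives, the corruption injection I used in those tests was a
rough shell sketch along these lines (the relation name and the block
number are arbitrary choices, and this assumes a cluster initialized
with data checksums):

    # Locate the file backing a relation, as a path relative to PGDATA.
    RELPATH=$(psql -Atc "SELECT pg_relation_filepath('pgbench_accounts')")
    # Overwrite one 8kB block (here block 10) with random data,
    # without truncating the rest of the file.
    dd if=/dev/urandom of="$PGDATA/$RELPATH" \
       bs=8192 count=1 seek=10 conv=notrunc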
So, I have done much more testing of this patch, using an instance with
a small shared buffer pool and pgbench running in parallel to get a
high eviction rate, and I cannot convince myself to go ahead with that
approach. My laptop easily got constrained on I/O, and within a total
of 2000 base backups or so, I have seen some 5 backup failures even
though the detection logic itself was correct (false positives coming
from pages rewritten while being read). The rate is low here, but even
a failure rate of 1~2% could be annoying for users. Couldn't we take a
different approach and remove this feature instead? This still requires
the grammar to be present in back-branches, but as things stand, we
have a feature that does not deliver on its promise, and that also
consumes resources for nothing on each base backup taken :/
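
For reference, the test setup was roughly the following sketch (the
exact numbers are arbitrary, and this assumes a cluster created with
initdb --data-checksums so that base backups verify checksums by
default):

    # Use a small buffer pool to maximize the page eviction rate.
    psql -c "ALTER SYSTEM SET shared_buffers = '1MB'"
    pg_ctl restart -D "$PGDATA"

    # Generate a continuous stream of writes in the background.
    pgbench -i -s 50
    pgbench -c 8 -j 4 -T 3600 &

    # Take base backups in a loop and count the failures.
    for i in $(seq 1 2000); do
        rm -rf /tmp/backup
        pg_basebackup -D /tmp/backup -c fast --no-sync \
            || echo "backup $i failed"
    done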
--
Michael