From: | "Andrey M(dot) Borodin" <x4mmm(at)yandex-team(dot)ru> |
---|---|
To: | Alvaro Herrera <alvherre(at)2ndquadrant(dot)com> |
Cc: | Dilip Kumar <dilipbalaut(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Allow ERROR from heap_prepare_freeze_tuple to be downgraded to WARNING |
Date: | 2020-07-21 02:53:59 |
Message-ID: | 82CBB5F5-F32A-4FA7-895A-9051F45F6894@yandex-team.ru |
Lists: | pgsql-hackers |
> On 21 Jul 2020, at 00:36, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com> wrote:
>
>
>> FWIW we coped with this by actively monitoring this kind of corruption
>> with this amcheck patch [0]. One can observe this lost page updates
>> cheaply in indexes and act on first sight of corruption: identify
>> source of the buggy behaviour.
>
> Right.
>
> I wish we had some way to better protect against this kind of problems,
> but I don't have any ideas. Some things can be protected against with
> checksums, but if you just lose a write, there's nothing to indicate
> that. We don't have a per-page write counter, or a central repository
> of per-page LSNs or checksums, and it seems very expensive to maintain
> such things.
If we had data checksums in a separate fork, we could flush them on checkpoint.
These checksums could protect against lost page updates.
And it would be much easier to maintain such checksums for SLRUs.
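To illustrate the idea (a minimal sketch, not an actual PostgreSQL design: the page size, CRC choice, and `Relation`/`checksum_fork` names here are all hypothetical), a per-page checksum fork flushed at checkpoint lets a later read detect a write that storage silently dropped, because the stale page image no longer matches the checkpointed checksum:

```python
# Hypothetical sketch of lost-write detection via a separate checksum fork.
# All names and parameters are illustrative, not PostgreSQL internals.
import zlib

PAGE_SIZE = 8192

class Relation:
    def __init__(self, npages):
        self.pages = [bytearray(PAGE_SIZE) for _ in range(npages)]
        # Durable per-page checksums, written out at checkpoint time.
        self.checksum_fork = [zlib.crc32(p) for p in self.pages]

    def checkpoint(self):
        # Flush the current per-page checksums to the durable fork.
        self.checksum_fork = [zlib.crc32(p) for p in self.pages]

    def verify(self, pageno):
        # On a later read: a mismatch against the checkpointed checksum
        # indicates the page write after that checkpoint never made it
        # to storage (a lost page update).
        return zlib.crc32(self.pages[pageno]) == self.checksum_fork[pageno]

rel = Relation(4)
rel.pages[2][:5] = b"hello"   # modify a page...
rel.checkpoint()              # ...and checkpoint the new checksums
assert rel.verify(2)          # page matches its checkpointed checksum

# Simulate a lost write: storage silently reverts the page to its old image.
rel.pages[2][:5] = b"\x00" * 5
assert not rel.verify(2)      # mismatch exposes the lost update
```

A plain in-page checksum (as with `data_checksums`) cannot catch this case, since the reverted page is internally consistent; only a copy of the checksum kept outside the page can.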
Best regards, Andrey Borodin.