From: Ants Aasma <ants(dot)aasma(at)cybertec(dot)at>
To: Nathan Bossart <nathandbossart(at)gmail(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, John Naylor <johncnaylorls(at)gmail(dot)com>, Bruce Momjian <bruce(at)momjian(dot)us>, Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>, "pgsql-hackers(at)lists(dot)postgresql(dot)org" <pgsql-hackers(at)lists(dot)postgresql(dot)org>, "Shankaran, Akash" <akash(dot)shankaran(at)intel(dot)com>, "Devulapalli, Raghuveer" <raghuveer(dot)devulapalli(at)intel(dot)com>
Subject: Re: Proposal for Updating CRC32C with AVX-512 Algorithm.
Date: 2024-12-13 13:12:44
Message-ID: CANwKhkOAeUa8=xevi=Vzdk+O48iSnMmfqPZ0b+ZVmc4+bFuRmQ@mail.gmail.com
Lists: pgsql-hackers
On Fri, 13 Dec 2024 at 00:14, Nathan Bossart <nathandbossart(at)gmail(dot)com> wrote:
>
> On Thu, Dec 12, 2024 at 10:45:29AM -0500, Andres Freund wrote:
> > Frankly, we should just move away from using CRCs. They're good for cases
> > where short runs of bit flips are much more likely than other kinds of errors
> > and where the amount of data covered by them has a low upper bound. That's not
> > at all the case for WAL records. It'd not matter too much if CRCs were cheap
> > to compute - but they aren't. We should instead move to some more generic
> > hashing algorithm, decent ones are much faster.
>
> Upthread [0], I wondered aloud about trying to reuse the page checksum code
> for this. IIRC there was a lot of focus on performance when that was
> added, and IME it catches problems decently well.
>
> [0] https://postgr.es/m/ZrUcX2kq-0doNBea%40nathan
It was carefully built to allow compiler auto-vectorization for
power-of-2 block sizes, so it runs fast on any CPU with fast
vectorized 32-bit multiplication instructions.
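
For reference, here is a simplified sketch of that structure, modeled
on src/include/storage/checksum_impl.h (the per-lane seed offsets and
the pd_checksum special-casing are omitted here):

/*
 * N_SUMS independent FNV-1a-style lanes give the compiler a clean
 * inner loop to vectorize across a full SIMD register.
 */
#include <stdint.h>
#include <stddef.h>

#define N_SUMS 32
#define FNV_PRIME 16777619

/* One mixing step: xor in the input, multiply, fold high bits back in. */
#define CHECKSUM_COMP(checksum, value) \
do { \
	uint32_t __tmp = (checksum) ^ (value); \
	(checksum) = __tmp * FNV_PRIME ^ (__tmp >> 17); \
} while (0)

static uint32_t
block_checksum(const uint32_t *data, size_t nwords)
{
	uint32_t	sums[N_SUMS] = {0};
	uint32_t	result = 0;
	size_t		i;
	int			j;

	/* nwords is assumed to be a multiple of N_SUMS (power-of-2 blocks) */
	for (i = 0; i < nwords; i += N_SUMS)
		for (j = 0; j < N_SUMS; j++)	/* auto-vectorizable inner loop */
			CHECKSUM_COMP(sums[j], data[i + j]);

	/* two rounds of zeroes for additional mixing, as in the original */
	for (i = 0; i < 2; i++)
		for (j = 0; j < N_SUMS; j++)
			CHECKSUM_COMP(sums[j], 0);

	/* xor-fold the partial checksums together */
	for (j = 0; j < N_SUMS; j++)
		result ^= sums[j];
	return result;
}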
Performance is great: compiled with -march=native it reaches 15.8
bytes/cycle on Zen 3, compared to 19.5 for t1ha0_aes_avx2, 7.9 for
aes-ni hash, and 2.15 for fasthash32. However, it isn't particularly
good for small (<1K) blocks, for both hash-quality and performance
reasons.
One idea would be to use fasthash for short lengths and an extended
version of the page checksum for larger values (a sketch of what that
could look like is below). But before committing to that approach, I
think the quality of the page checksum algorithm deserves a fresh
look; quality and robustness were not the highest priorities when it
was developed.
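
To make the shape of that concrete, a hypothetical dispatcher might
look like the following. hash_short() is a fasthash64-style mixer
folded to 32 bits, hash_block() is only a placeholder prototype for
an extended variable-length page checksum, and the 1K cutoff is just
the rough threshold mentioned above; none of these names exist in
PostgreSQL today.

#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define HYBRID_CUTOFF 1024		/* rough point where wide lanes win */

/* placeholder for an extended, variable-length page checksum */
extern uint32_t hash_block(const void *data, size_t len, uint64_t seed);

static uint64_t
mix64(uint64_t v)
{
	/* fasthash's mix step */
	v ^= v >> 23;
	v *= 0x2127599bf4325c37ULL;
	v ^= v >> 47;
	return v;
}

static uint32_t
hash_short(const void *data, size_t len, uint64_t seed)
{
	const uint64_t m = 0x880355f21e6d1965ULL;	/* fasthash multiplier */
	const unsigned char *p = data;
	uint64_t	h = seed ^ (len * m);
	uint64_t	v;

	for (; len >= 8; p += 8, len -= 8)
	{
		memcpy(&v, p, 8);
		h = (h ^ mix64(v)) * m;
	}
	if (len > 0)
	{
		v = 0;
		memcpy(&v, p, len);		/* zero-padded tail */
		h = (h ^ mix64(v)) * m;
	}
	h = mix64(h);
	return (uint32_t) (h - (h >> 32));	/* fasthash32-style fold */
}

static inline uint32_t
hybrid_hash(const void *data, size_t len, uint64_t seed)
{
	return len < HYBRID_CUTOFF
		? hash_short(data, len, seed)	/* cheap for short WAL records */
		: hash_block(data, len, seed);	/* SIMD lanes pay off here */
}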
--
Ants Aasma
Lead Database Consultant
www.cybertec-postgresql.com