From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Peter Eisentraut <peter(dot)eisentraut(at)enterprisedb(dot)com>, Peter Geoghegan <pg(at)bowt(dot)ie>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: better page-level checksums
Date: 2022-06-10 16:08:22
Message-ID: 20220610160822.GS9030@tamriel.snowman.net
Lists: pgsql-hackers
Greetings,
* Robert Haas (robertmhaas(at)gmail(dot)com) wrote:
> On Fri, Jun 10, 2022 at 9:36 AM Peter Eisentraut
> <peter(dot)eisentraut(at)enterprisedb(dot)com> wrote:
> > I think there ought to be a bit more principled analysis here than just
> > "let's add a lot more bits". There is probably some kind of information
> > to be had about how many CRC bits are useful for a given block size, say.
> >
> > And then there is the question of performance. When data checksums were
> > first added, there was a lot of concern about that. CRC is usually
> > baked directly into hardware, so it's about as cheap as we can hope for.
> > SHA not so much.
>
> That's all pretty fair. I have to admit that SHA checksums sound quite
> expensive, and also that I'm no expert on what kinds of checksums
> would be best for this sort of application. Based on the earlier
> discussions around TDE, I do think that people want tamper-resistant
> checksums here too -- like maybe something where you can't recompute
> the checksum without access to some secret. I could propose naive ways
> to do that, like prepending a fixed chunk of secret bytes to the
> beginning of every block and then running SHA512 or something over the
> result, but I'm sure that people with actual knowledge of cryptography
> have developed much better and more robust ways of doing this sort of
> thing.
So, it's not quite as simple as "use X" or "use Y"; we need to consider
the use case too. In particular, the amount of data being hashed is
relevant to the choice of hash or checksum. When you're talking about
(potentially) 1G segment files, you'll want something different (like
SHA) than when you're talking about an 8K block (not that you couldn't
use SHA there, but it may very well be overkill for it).
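To make that a bit more concrete, here's a rough sketch (illustrative
only, nothing from the tree; the function name and buffer size are made
up) of streaming SHA-256 over a large segment file with OpenSSL's EVP
digest interface. For a single 8K page, something far cheaper like a
CRC is usually plenty:

/*
 * Sketch only: streaming SHA-256 over a (potentially 1G) segment file
 * with OpenSSL's EVP digest interface.  The function name is made up.
 */
#include <openssl/evp.h>
#include <stdio.h>

static int
sha256_segment_file(const char *path, unsigned char digest[32])
{
    EVP_MD_CTX   *md = EVP_MD_CTX_new();
    FILE         *fp = fopen(path, "rb");
    unsigned char buf[8192];
    unsigned int  digestlen = 0;
    size_t        n;
    int           ok = 0;

    if (md != NULL && fp != NULL &&
        EVP_DigestInit_ex(md, EVP_sha256(), NULL) == 1)
    {
        /* Feed the file through the digest in 8K chunks. */
        while ((n = fread(buf, 1, sizeof(buf), fp)) > 0)
            if (EVP_DigestUpdate(md, buf, n) != 1)
                goto done;
        ok = (EVP_DigestFinal_ex(md, digest, &digestlen) == 1);
    }

done:
    if (fp != NULL)
        fclose(fp);
    EVP_MD_CTX_free(md);
    return ok;
}

The point being: one SHA pass per 1G file is a reasonable cost, while
doing that on every 8K page write is a very different cost profile.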
In terms of TDE, that's yet another use case, and there you'd want AE
(authenticated encryption) with AAD (additional authenticated data).
The result of that operation is a block with some amount of unencrypted
data (eg: the LSN, potentially used as the IV), some amount of
encrypted data (eg: everything else), and then space to store the tag.
The tag can be thought of as, but is *distinct* from, a hash over the
encrypted data plus the additional authenticated data, where the latter
includes the unencrypted data on the block, like the LSN, plus other
information we want bound in, such as the file's qualified
path+filename relative to the PGDATA root. If our goal is
cryptographically authenticated and encrypted data pages (which I
believe is at least one of our goals) then we're talking about
encryption methods like AES-GCM, which produce the tag for us, and with
that tag we would *not* need any independent hash or checksum for the
block. (We could still have one, but it should really go in the
*encrypted* section: hashing unencrypted data and then storing that
hash unencrypted could leak information that we'd rather not.)
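As a rough illustration of what that looks like with OpenSSL's EVP
interface (just a sketch, not proposed code; the function name, header
length, and AAD layout are all made up for the example): AES-256-GCM
with the block identity and the unencrypted page prefix passed as AAD:

/*
 * Sketch only (not proposed code): AES-256-GCM over a single 8K page
 * using OpenSSL's EVP interface.  The first HDR_LEN bytes (e.g. the
 * LSN) stay unencrypted and are authenticated as AAD together with the
 * block's identity (path relative to PGDATA + block number), so the
 * tag binds the ciphertext to both.  All names are illustrative.
 */
#include <openssl/evp.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 8192
#define HDR_LEN   8          /* unencrypted prefix, e.g. the page LSN */
#define TAG_LEN   16         /* 128-bit GCM tag */

static int
encrypt_page_gcm(const unsigned char key[32],
                 const char *relpath, uint32_t blkno,
                 const unsigned char in[PAGE_SIZE],
                 unsigned char out[PAGE_SIZE],
                 unsigned char tag[TAG_LEN])
{
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    unsigned char   iv[12];
    unsigned char   aad[512];
    int             aadlen, len, ok = 0;

    if (ctx == NULL)
        return 0;

    /* 96-bit IV from the unencrypted LSN bytes plus the block number;
     * it must never repeat for the same key. */
    memcpy(iv, in, HDR_LEN);
    memcpy(iv + HDR_LEN, &blkno, sizeof(blkno));

    /* AAD: block identity plus the unencrypted part of the page. */
    aadlen = snprintf((char *) aad, sizeof(aad) - HDR_LEN, "%s:%u",
                      relpath, blkno);
    if (aadlen < 0 || aadlen >= (int) sizeof(aad) - HDR_LEN)
        goto done;
    memcpy(aad + aadlen, in, HDR_LEN);
    aadlen += HDR_LEN;

    /* The unencrypted prefix is copied through as-is. */
    memcpy(out, in, HDR_LEN);

    if (EVP_EncryptInit_ex(ctx, EVP_aes_256_gcm(), NULL, NULL, NULL) == 1 &&
        EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_SET_IVLEN,
                            sizeof(iv), NULL) == 1 &&
        EVP_EncryptInit_ex(ctx, NULL, NULL, key, iv) == 1 &&
        EVP_EncryptUpdate(ctx, NULL, &len, aad, aadlen) == 1 &&
        EVP_EncryptUpdate(ctx, out + HDR_LEN, &len,
                          in + HDR_LEN, PAGE_SIZE - HDR_LEN) == 1 &&
        EVP_EncryptFinal_ex(ctx, out + HDR_LEN + len, &len) == 1 &&
        EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, TAG_LEN, tag) == 1)
        ok = 1;

done:
    EVP_CIPHER_CTX_free(ctx);
    return ok;
}

The useful property is that the tag covers both the ciphertext and the
AAD, so moving the block to a different file or block number, or
tampering with the unencrypted LSN, makes verification fail on decrypt.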
Note that NIST has put out information regarding how big a tag is
appropriate for how much data is being encrypted with a given
authenticated encryption method such as AES GCM. I recall Robert
finding similar information for hashing/checksumming of unencrypted
data from a similar source and that'd make sense to consider when
talking about *just* adding a hash/checksum for unencrypted data blocks.
This is the relevant discussion from NIST on this subject:
https://nvlpubs.nist.gov/nistpubs/legacy/sp/nistspecialpublication800-38d.pdf
Note particularly Appendix C: Requirements and Guidelines for Using
Short Tags (though, really, the whole thing is good to read..).
> I've really been devoting most of my mental energy here to
> understanding what problems there are at the PostgreSQL level - i.e.
> when we carve out bytes for a wider checksum, what breaks? The only
> research that I did to try to understand what algorithms might make
> sense was a quick Google search, which led me to the list of
> algorithms that btrfs uses. I figured that was a good starting point
> because, like a filesystem, we're encrypting fixed-size blocks of
> data. However, I didn't intend to present the results of that quick
> look as the definitive answer to the question of what might make sense
> for PostgreSQL, and would be interested in hearing what you or anyone
> else thinks about that.
In the thread about checksums/hashes for the backup manifest, I'm
pretty sure you found some information regarding the amount of data
being hashed vs. the size you'd want the hash/checksum to be, and that
seems particularly relevant for this discussion (as it was for backups,
at least as I recall..). Hopefully we can go find that.
Thanks,
Stephen