From: | Bruce Momjian <bruce(at)momjian(dot)us> |
---|---|
To: | Stephen Frost <sfrost(at)snowman(dot)net> |
Cc: | Joe Conway <mail(at)joeconway(dot)com>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Antonin Houska <ah(at)cybertec(dot)at>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, "Moon, Insung" <Moon_Insung_i3(at)lab(dot)ntt(dot)co(dot)jp>, Ibrar Ahmed <ibrar(dot)ahmad(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: [Proposal] Table-level Transparent Data Encryption (TDE) and Key Management Service (KMS) |
Date: | 2019-07-08 23:59:11 |
Message-ID: | 20190708235911.k6t6lvzgkh7gmn5x@momjian.us |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Thread: | |
Lists: | pgsql-hackers |
On Mon, Jul 8, 2019 at 07:27:12PM -0400, Stephen Frost wrote:
> * Bruce Momjian (bruce(at)momjian(dot)us) wrote:
> > Operationally, how would that work? We unlock them all on boot but
> > somehow make them inaccessible to some backends after that?
>
> That could work and doesn't seem like an insurmountable challenge. The
> way that's been discussed, at least somewhere in the past, is leveraging
> the exec backend framework to have the user-connected backends work in
> an independent space from the processes launched at startup.
Just do it in another cluster --- why bother with all that?
> > > > > > > amount of data you transmit over a given TLS connection increases
> > > > > > > though, the risk increases and it would be better to re-key. How much
> > > > > > > better? That depends a great deal on if someone is trying to mount an
> > > > > > > attack or not.
> > > > > >
> > > > > > Yep, we need to allow rekey.
> > > > >
> > > > > Supporting a way to rekey is definitely a good idea.
> > > >
> > > > It is a requirement, I think. We might have problem tracking exactly
> > > > what key _version_ each table (or 8k block), or WAL file are. :-(
> > > > Ideally we would allow only two active keys, and somehow mark each page
> > > > as using the odd or even key at a given time, or something strange.
> > > > (Yeah, hand waving here.)
> > >
> > > Well, that wouldn't be the ideal since it would limit us to some small
> > > number of GBs of data written, based on the earlier discussion, right?
> >
> > No, it is GB per secret-nonce combination.
>
> Hrmpf. I'm trying to follow the logic that draws this conclusion.
>
> As I understand it, the NIST recommendation is a 96-bit *random* nonce,
> and then there's also a recommendation to not encrypt more than 2^32
> messages- much less than the 96-bit random nonce, at least potentially
> because that limits the repeat-nonce risk to a very low probability.
>
> If the amount-you-can-encrypt is really per secret+nonce combination,
> then how do those recommendations make sense..? This is where I really
> think we should be reading through and understanding exactly what the
> NIST recommendations are and not just trying to follow through things on
> stackoverflow.
Yes, it needs more research.
> > > I'm not sure that I can see through to a system where we are rewriting
> > > tables that are out on disk every time we hit 60GB of data written.
> > >
> > > Or maybe I'm misunderstanding what you're suggesting here..?
> >
> > See above.
>
> How long would these keys be active for then in the system..? How much
> data would they potentially be used to encrypt? Strikes me as likely to
> be an awful lot...
I think we need to look at CTR vs GCM.
> > Uh, well, you would think so, but for some reason AES just doesn't allow
> > that kind of attack, unless you brute force it trying every key. The
> > nonce is only to prevent someone from detecting that two output
> > encryption pages contain the same contents originally.
>
> That's certainly interesting, but such a brute-force with every key
> would allow it, where, if you use a random nonce, then such an attack
> would have to start working only after having access to the data, and
> not be something that could be pre-computed.
Uh, the nonce is going to have to be unecrypted so it can be fed into
the crypto method, so it will be visible.
> > > and a recommendation by NIST certainly holds a lot of water, at least
> > > for me. They also have a recommendation regarding the amount of data to
> >
> > Agreed.
>
> This is just it though, at least from my perspective- we are saying "ok,
> well, we know people recommend using a random nonce, but that's hard, so
> we aren't going to do that because we don't think it's important for our
> application", but we aren't cryptographers. I liken this to whatever
> discussion lead to using the username as the salt for our md5
> authentication method- great intentions, but not complete understanding,
> leading to a less-than-desirable result.
>
> When it comes to this stuff, I don't think we really get to pick and
> choose what we follow and what we don't. If the recommendation from an
> authority says we should use random nonces, then we *really* need to
> listen and do that, because that authority is a bunch of cryptographers
> with a lot more experience and who have definitely spent a great deal
> more time thinking about this than we have.
>
> If there's a recommendation from such an authority that says we *don't*
> need to use a random nonce, great, I'm happy to go review that and agree
> with it, but discussions on stackoverflow or similar don't hold the same
> weight that a recommendation from NIST does.
Yes, we need to get some experts involved.
> > > > Well, in many modes the nonce is just a counter, but as stated above,
> > > > not all modes. I need to pull out my security books to remember for
> > > > which ones it is safe. (Frankly, it is a lot easier to use a random
> > > > nonce for WAL than 8k pages.)
> > >
> > > I do appreciate that, but given the recommendation that you can encrypt
> > > gigabytes before needing to change, I don't know that we really gain a
> > > lot by changing for every 8K page.
> >
> > Uh, well, if you don't do that, you need to use the contents of the
> > previous page for the next page, and I think we want to encrypt each 8k
> > page independenty of what was before it.
>
> I'm not sure that we really want to do this at the 8K level... I'll
> admit that I'm not completely sure *where* to draw that line then
> though.
Uh, if you want more than 8k you will need to have surrounding 8k pages
in shared buffers, which seems unworkable.
> > As far as I know, TDE was to prevent someone with file system access
> > from reading the data.
>
> This seems pretty questionable, doesn't it? Who gets access to a system
> without having some access to what's running at the same time? Perhaps
> if the drive is stolen out from under the running system, but then that
> could be protected against using filesystem-level encryption. If we're
> trying to mimic that, which by itself would be good, then wouldn't we
> want to do so with similar capabilities- that is, by having
> per-tablespace keys? Since that's what someone running with filesystem
> level encryption would have. Of course, if they don't mount all the
> filesystems they've got set up then they have problems, but that's their
> choice.
>
> In the end, having this bit of flexibility allows us to have the same
> level of options that someone using filesystem-level encryption would
> have, but it also starts us down the path to having something which
> would work against another attack vector where someone has control over
> a complete running backend.
Again, why not just use a different cluster?
--
Bruce Momjian <bruce(at)momjian(dot)us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +
From | Date | Subject | |
---|---|---|---|
Next Message | Bruce Momjian | 2019-07-09 00:11:55 | Re: Increasing default value for effective_io_concurrency? |
Previous Message | Bruce Momjian | 2019-07-08 23:40:03 | Re: OpenSSL specific value under USE_SSL instead of USE_OPENSSL |