From: Jacob Champion <jchampion(at)timescale(dot)com>
To: Andrey Chudnovsky <achudnovskij(at)gmail(dot)com>
Cc: mahendrakar s <mahendrakarforpg(at)gmail(dot)com>, "hlinnaka(at)iki(dot)fi" <hlinnaka(at)iki(dot)fi>, "michael(at)paquier(dot)xyz" <michael(at)paquier(dot)xyz>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>, "smilingsamay(at)gmail(dot)com" <smilingsamay(at)gmail(dot)com>
Subject: Re: [PoC] Federated Authn/z with OAUTHBEARER
Date: 2022-11-29 21:12:21
Message-ID: 7b8037b0-69fb-0f1d-5e0b-8001672db968@timescale.com
Lists: pgsql-hackers
On 11/23/22 19:45, Andrey Chudnovsky wrote:
> Mostly terminology questions here. OAUTHBEARER SASL appears to be the
> spec about using OAUTH2 tokens for Authentication.
> While any OAUTH2 can generally work, we propose to specifically
> highlight that only OIDC providers can be supported, as we need the
> discovery document.
*If* you're using in-band discovery, yes. But I thought your use case
was explicitly tailored to out-of-band token retrieval:
> The client knows how to get a token for a particular principal
> and doesn't need any additional information other than human readable
> messages.
In that case, isn't OAuth sufficient? There's definitely a need to
document the distinction, but I don't think we have to require OIDC as
long as the client application makes up for the missing information.
(OAUTHBEARER makes the openid-configuration error member optional,
presumably for this reason.)
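For concreteness, here's a rough sketch (in Python, with a placeholder issuer URL) of what the OAUTHBEARER error result looks like per RFC 7628: only "status" is required, and a client with out-of-band knowledge can simply ignore the absence of "openid-configuration".

```python
import json

# RFC 7628 server error result: a JSON body returned in the SASL challenge
# when authentication fails. Only "status" is mandatory; the
# "openid-configuration" member is optional, which is what lets a client
# with out-of-band knowledge skip OIDC discovery entirely.
with_discovery = {
    "status": "invalid_token",
    # placeholder URL; a real server would advertise its actual issuer
    "openid-configuration":
        "https://issuer.example.com/.well-known/openid-configuration",
}
without_discovery = {"status": "invalid_token"}

def discovery_url(error_result):
    """Return the discovery document URL if the server advertised one."""
    return json.loads(error_result).get("openid-configuration")

print(discovery_url(json.dumps(with_discovery)))     # the issuer's discovery URL
print(discovery_url(json.dumps(without_discovery)))  # None: client must already know
```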
>> 0) The original hook proposal upthread, I thought, was about allowing
>> libpq's flow implementation to be switched out by the application. I
>> don't see that approach taken here. It's fine if that turned out to be a
>> bad idea, of course, but this patch doesn't seem to match what we were
>> talking about.
> We still plan to allow the client to pass the token. Which is a
> generic way to implement its own OAUTH flows.
Okay. But why push down the implementation into the server?
To illustrate what I mean, here's the architecture of my proposed patchset:
+-------+ +----------+
| | -------------- Empty Token ------------> | |
| libpq | <----- Error Result (w/ Discovery ) ---- | |
| | | |
| +--------+ +--------------+ | |
| | iddawc | <--- [ Flow ] ----> | Issuer/ | | Postgres |
| | | <-- Access Token -- | Authz Server | | |
| +--------+ +--------------+ | +-----------+
| | | | |
| | -------------- Access Token -----------> | > | Validator |
| | <---- Authorization Success/Failure ---- | < | |
| | | +-----------+
+-------+ +----------+
In this implementation, there's only one black box: the validator, which
is responsible for taking an access token from an untrusted client,
verifying that it was issued correctly for the Postgres service, and
either 1) determining whether the bearer is authorized to access the
database, or 2) determining the authenticated ID of the bearer so that
the HBA can decide whether they're authorized. (Or both.)
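As a sketch of that black box's contract (hypothetical names; a real validator must first verify the token cryptographically, e.g. via JWT signature checks against the issuer's keys or a token-introspection endpoint, which is elided here):

```python
import time

# Hypothetical sketch of the server-side validator: given the (already
# verified and parsed) claims of a bearer token, decide whether it was
# issued for this Postgres service, and report the authenticated identity
# so the HBA can make the authorization decision.
EXPECTED_ISSUER = "https://issuer.example.com"   # assumption: configured per server
EXPECTED_AUDIENCE = "postgres://db.example.com"  # assumption: identifies this service

def validate(claims):
    """Return the authenticated subject, or None if the token is unacceptable."""
    if claims.get("iss") != EXPECTED_ISSUER:
        return None                      # issued by an untrusted party
    if EXPECTED_AUDIENCE not in claims.get("aud", []):
        return None                      # issued for some other service
    if claims.get("exp", 0) <= time.time():
        return None                      # expired
    return claims.get("sub")             # identity for the HBA to authorize

claims = {"iss": EXPECTED_ISSUER, "aud": [EXPECTED_AUDIENCE],
          "exp": time.time() + 300, "sub": "andrey@example.com"}
print(validate(claims))
```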
This approach is limited by the flows that we explicitly enable within
libpq and its OAuth implementation library. You mentioned that you
wanted to support other flows, including clients with out-of-band
knowledge, and I suggested:
> If you wanted to override [iddawc's]
> behavior as a client, you could replace the builtin flow with your
> own, by registering a set of callbacks.
In other words, the hooks would replace iddawc in the above diagram.
In my mind, something like this:
+-------+ +----------+
+------+ | ----------- Empty Token ------------> | Postgres |
| | < | <---------- Error Result ------------ | |
| Hook | | | +-----------+
| | | | | |
+------+ > | ------------ Access Token ----------> | > | Validator |
| | <--- Authorization Success/Failure -- | < | |
| libpq | | +-----------+
+-------+ +----------+
Now there's a second black box -- the client hook -- which takes an
OAUTHBEARER error result (which may or may not have OIDC discovery
information) and returns the access token. How it does this is
unspecified -- it'll probably use some OAuth 2.0 flow, but maybe not.
Maybe it sends the user to a web browser; maybe it uses some of the
magic provider-specific libraries you mentioned upthread. It might have
a refresh token cached so it doesn't have to involve the user at all.
Crucially, though, the two black boxes remain independent of each other.
They have well-defined inputs and outputs (the client hook could be
roughly described as "implement get_auth_token()"). Their correctness
can be independently verified against published OAuth specs and/or
provider documentation. And the client application still makes a single
call to PQconnect*().
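The client hook's contract could be sketched roughly like this (all names hypothetical; the point is only the shape of the inputs and output):

```python
import json

# Hypothetical shape of the client-side hook: it receives the OAUTHBEARER
# error result from the first round trip (which may or may not carry OIDC
# discovery information) and must return a bearer token. How it gets one is
# its own business: a cached token, a provider-specific library, a
# browser-based flow, or out-of-band knowledge.
def get_auth_token(error_result, token_cache):
    discovery = json.loads(error_result).get("openid-configuration")
    if "access_token" in token_cache:
        return token_cache["access_token"]    # e.g. previously cached out of band
    if discovery is None:
        raise RuntimeError("no discovery info and no out-of-band token")
    # ... otherwise run some OAuth 2.0 flow against the discovered issuer ...
    raise NotImplementedError

print(get_auth_token('{"status":"invalid_token"}', {"access_token": "tok123"}))
```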
Compare this to the architecture proposed by your patch:
Client App
+----------------------+
| +-------+ +----------+
| | libpq | | Postgres |
| PQconnect > | | | +-------+
| +------+ | ------- Flow Type (!) -------> | > | |
| +- < | Hook | < | <------- Error Result -------- | < | |
| [ get +------+ | | | |
| token ] | | | | |
| | | | | | Hooks |
| v | | | | |
| PQconnect > | ----> | ------ Access Token ---------> | > | |
| | | <--- Authz Success/Failure --- | < | |
| +-------+ | +-------+
+----------------------+ +----------+
Rather than decouple things, I think this proposal drives a spike
through the client app, libpq, and the server. Please correct me if I've
misunderstood pieces of the patch, but the following is my view of it:
What used to be a validator hook on the server side now actively
participates in the client-side flow for some reason. (I still don't
understand what the server is supposed to do with that knowledge.
Changing your authz requirements based on the flow the client wants to
use seems like a good way to introduce bugs.)
The client-side hook is now coupled to the application logic: you have
to know to expect an error from the first PQconnect*() call, then check
whatever magic your hook has done for you to be able to set up the
second call to PQconnect*() with the correctly scoped bearer token. So
if you want to switch between the internal libpq OAuth implementation
and your own hook, you have to rewrite your app logic.
On top of all that, the "flow type code" being sent is a custom
extension to OAUTHBEARER that appears to be incompatible with the RFC's
discovery exchange (which is done by sending an empty auth token during
the first round trip).
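To show what that RFC exchange looks like on the wire, here's one plausible encoding of the client's initial response per RFC 7628 (a GS2 header plus key=value pairs delimited by 0x01 bytes); the "discovery probe" is just this message with an empty bearer value:

```python
# RFC 7628 client initial response: GS2 header, then key=value pairs, each
# terminated by a 0x01 byte (kvsep). Sending an empty bearer value makes
# the server fail the exchange and return its error result, possibly
# carrying the openid-configuration URL. Encoding details here are a
# simplified sketch, not libpq's actual implementation.
KVSEP = "\x01"

def initial_response(token=""):
    gs2_header = "n,,"                       # no channel binding, no authzid
    auth = f"auth=Bearer {token}".rstrip()   # empty token -> "auth=Bearer"
    return gs2_header + KVSEP + auth + KVSEP + KVSEP

print(repr(initial_response()))          # the "empty token" discovery probe
print(repr(initial_response("tok123")))  # a normal authentication attempt
```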
> We consider the following set of flows to be minimum required:
> - Client Credentials - For Service to Service scenarios.
Okay, that's simple enough that I think it could probably be maintained
inside libpq with minimal cost. At the same time, is it complicated
enough that you need libpq to do it for you?
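For reference, the whole flow (RFC 6749 section 4.4) is a single POST to the token endpoint with no user interaction, which is why it's cheap to keep inside libpq. Endpoint and credentials below are placeholders:

```python
from urllib.parse import urlencode

# RFC 6749 §4.4 client credentials grant: one POST to the issuer's token
# endpoint. The endpoint would normally come from the discovery document;
# all values here are placeholders.
token_endpoint = "https://issuer.example.com/oauth2/token"

body = urlencode({
    "grant_type": "client_credentials",
    "client_id": "my-service",      # placeholder
    "client_secret": "s3cr3t",      # placeholder; often sent via Basic auth instead
    "scope": "postgres",            # whatever scope the validator expects
})
print(body)
# The JSON response carries "access_token", "token_type", "expires_in";
# the access token is then presented in the OAUTHBEARER exchange.
```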
Maybe once we get the hooks ironed out, it'll be more obvious what the
tradeoff is...
> If you prefer to pick one of the richer flows, Authorization code for
> GUI scenarios is probably much more widely used.
> Plus it's easier to implement too, as interaction goes through a
> series of callbacks. No polling required.
I don't think flows requiring the invocation of web browsers and custom
URL handlers are a clear fit for libpq. For a first draft, at least, I
think that use case should be pushed upward into the client application
via a custom hook.
>> If we want to support refresh tokens, I believe we should be developing
>> a plan to cache and secure them within the client. They should be used
>> as an accelerator for other flows, not as their own flow.
> It's considered a separate "grant_type" in the specs / APIs.
> https://openid.net/specs/openid-connect-core-1_0.html#RefreshTokens
Yes, but that doesn't mean we have to expose it to users via a
connection option. You don't get a refresh token out of the blue; you
get it by going through some other flow, and then you use it in
preference to going through that flow again later.
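The "accelerator" framing can be sketched as follows (hypothetical helper names; the refresh redemption itself is RFC 6749 section 6):

```python
# Sketch of "refresh token as an accelerator": the refresh token is never a
# flow of its own. It falls out of some primary flow (authorization code,
# device code, ...) and is then redeemed to skip that flow on later
# connections, falling back when it has been revoked or expired.
def obtain_access_token(cache, run_primary_flow, redeem_refresh_token):
    if "refresh_token" in cache:
        try:
            tokens = redeem_refresh_token(cache["refresh_token"])
        except Exception:
            tokens = run_primary_flow()   # refresh token no longer good: fall back
    else:
        tokens = run_primary_flow()       # first time: run the real flow
    cache["refresh_token"] = tokens.get("refresh_token",
                                        cache.get("refresh_token"))
    return tokens["access_token"]

cache = {}
primary = lambda: {"access_token": "at1", "refresh_token": "rt1"}
redeem = lambda rt: {"access_token": "at2"}
print(obtain_access_token(cache, primary, redeem))  # first call runs the primary flow
print(obtain_access_token(cache, primary, redeem))  # second call redeems rt1 instead
```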
> For the clients, it would be storing the token and using it to authenticate.
> On the question of sensitivity, secure credentials stores are
> different for each platform, with a lot of cloud offerings for this.
> pgAdmin, for example, has its own way to secure credentials to avoid
> asking users for passwords every time the app is opened.
> I believe we should delegate the refresh token management to the clients.
Delegating to client apps would be fine (and implicitly handled by a
token hook, because the client app would receive the refresh token
directly rather than going through libpq). Delegating to end users, not
so much. Printing a refresh token to stderr as proposed here is, I
think, making things unnecessarily difficult (and/or dangerous) for users.
>> 3) I don't like the departure from the OAUTHBEARER mechanism that's
>> presented here. For one, since I can't see a sample plugin that makes
>> use of the "flow type" magic numbers that have been added, I don't
>> really understand why the extension to the mechanism is necessary.
> I don't think it's much of a departure, but rather a separation of
> responsibilities between libpq and upstream clients.
Given the proposed architectures above, 1) I think this is further
coupling the components, not separating them; and 2) I can't agree that
an incompatible discovery mechanism is "not much of a departure". If
OAUTHBEARER's functionality isn't good enough for some reason, let's
talk about why.
> As libpq can be used in different apps, the client would need
> different types of flows/grants.
> I.e. those need to be provided to libpq at connection initialization
> or some other point.
Why does libpq (or the server!) need to know those things at all, if
they're not going to implement the flow?
> We will change to "grant_type" though and use string to be closer to the spec.
> What do you think is the best way for the client to signal which OAUTH
> flow should be used?
libpq should not need to know the grant type in use if the client is
bypassing its internal implementation entirely.
Thanks,
--Jacob