From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Peter <pmc(at)citylink(dot)dinoex(dot)sub(dot)org>, pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: 12.1 not useable: clientlib fails after a dozen queries (GSSAPI ?)
Date: 2020-01-14 20:12:07
Message-ID: 20200114201207.GA3195@tamriel.snowman.net
Lists: pgsql-general pgsql-hackers
Greetings,
* Tom Lane (tgl(at)sss(dot)pgh(dot)pa(dot)us) wrote:
> I wrote:
> > Here's a draft patch that cleans up all the logic errors I could find.
>
> So last night I was assuming that this problem just requires more careful
> attention to what to return in the error exit paths. In the light of
> morning, though, I realize that the algorithms involved in
> be-secure-gssapi.c and fe-secure-gssapi.c are just fundamentally wrong:
>
> * On the read side, the code will keep looping until it gets a no-data
> error from the underlying socket call. This is silly. In every or
> almost every use, the caller's read length request corresponds to the
> size of a buffer that's meant to be larger than typical messages, so
> that betting that we're going to fill that buffer completely is the
> wrong way to bet. Meanwhile, it's fairly likely that the incoming
> encrypted packet's length *does* correspond to some actual message
> boundary; that would only not happen if the sender is forced to break
> up a message, which ought to be a minority situation, else our buffer
> size choices are too small. So it's very likely that the looping just
> results in doubling the number of read() calls that are made, with
> half of them failing with EWOULDBLOCK. What we should do instead is
> return to the caller whenever we finish handing back the decrypted
> contents of a packet. We can do the read() on the next call, after
> the caller's dealt with that data.
Yeah, I agree that this is a better approach. Doing unnecessary
read()s certainly isn't ideal, but beyond being wasteful it doesn't
sound like the read side was fundamentally broken..? (Yes, I
understand that the error cases weren't being handled properly.)
> * On the write side, if the code encrypts some data and then gets
> EWOULDBLOCK trying to write it, it will tell the caller that it
> successfully wrote that data. If that was all the data the caller
> had to write (again, not so unlikely) this is a catastrophic
> mistake, because the caller will be satisfied and will go to sleep,
> rather than calling again to attempt another write. What we *must*
> do is to reflect the write failure verbatim whether or not we
> encrypted some data. We must remember how much data we encrypted
> and then discount that much of the caller's supplied data next time.
> There are hints in the existing comments that somebody understood
> this at one point, but the code isn't acting that way today.
That's a case I hadn't considered, and you're right- the algorithm
certainly wouldn't work in such a case. I don't recall specifically
whether the code handled it better previously, but I do recall that at
one point the API was defined as "give me back the exact same buffer,
because I had to stop", and I remember finding that too ugly. Seeing
this issue, I get it now. I'd certainly be happier if there were a
better alternative, but I don't know that there really is.
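For my own understanding, the write-side rule you're describing would look something like this (again a hedged, self-contained sketch with a mocked socket; `sketch_secure_write` and `SendConsumed` are invented names, and the encryption step is stubbed out where the real code would call gss_wrap()):

```c
#include <errno.h>
#include <sys/types.h>

/* Bytes of the caller's data already encrypted into our (mock) output
 * buffer but not yet reported to the caller as written. */
static size_t SendConsumed = 0;

/* Mock socket state: 0 = write would block, 1 = writable. */
static int SocketWritable = 0;

/*
 * Encrypt the not-yet-consumed part of the caller's data, then try to
 * flush.  If the flush would block, remember how much caller data is
 * sitting in the encrypted buffer and reflect the failure verbatim --
 * never report those bytes as written, or the caller may go to sleep
 * with data still queued.
 */
static ssize_t
sketch_secure_write(const void *data, size_t len)
{
	size_t		to_encrypt = len - SendConsumed;

	(void) data;
	(void) to_encrypt;			/* real code would gss_wrap() this part */

	if (!SocketWritable)
	{
		SendConsumed = len;		/* discount these bytes next time */
		errno = EWOULDBLOCK;
		return -1;				/* failure, even though we encrypted */
	}

	/* Flushed: everything, including any previously consumed prefix,
	 * is now on the wire, so we can finally report the full length. */
	SendConsumed = 0;
	return (ssize_t) len;
}
```

So when the caller retries with the same data after the socket becomes
writable, the already-encrypted prefix is discounted and the full
length is reported only once it's actually flushed.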
Thanks,
Stephen