From: Jacob Champion <jacob(dot)champion(at)enterprisedb(dot)com>
To: Jacob Burroughs <jburroughs(at)instructure(dot)com>
Cc: Jelte Fennema-Nio <postgres(at)jeltef(dot)nl>, Robert Haas <robertmhaas(at)gmail(dot)com>, "Andrey M(dot) Borodin" <x4mmm(at)yandex-team(dot)ru>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: libpq compression (part 3)
Date: 2024-05-21 19:42:57
Message-ID: CAOYmi+mQGHm2wRO2wndAHTqFyM8iXfZ7YMBy993jWWMD2YKp5Q@mail.gmail.com
Lists: pgsql-hackers

On Tue, May 21, 2024 at 12:08 PM Jacob Burroughs
<jburroughs(at)instructure(dot)com> wrote:
> I think I thought I was writing about something else when I wrote that
> :sigh:. What I really should have written was a version of the part
> below: we use streaming decompression, only decompress 8kB at a time,
> and limit pre-auth messages to `PG_MAX_AUTH_TOKEN_LENGTH` (65535
> bytes), which isn't enough data for decompression to cause any more
> real-world pain than sending equivalently sized invalid uncompressed
> auth packets would.
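
For concreteness, here's a minimal sketch of that shape in C (not the
patch's actual code: zlib stands in for whichever codec was negotiated,
and inflate_bounded(), CHUNK, and MAX_PREAUTH are names I made up). It
inflates 8kB at a time and rejects the stream as soon as the plaintext
would exceed the 65535-byte pre-auth cap:

#include <zlib.h>

#define CHUNK       8192        /* decompress 8kB at a time */
#define MAX_PREAUTH 65535       /* mirrors PG_MAX_AUTH_TOKEN_LENGTH */

/* Returns 0 on success, -1 on corrupt input or limit breach. */
static int
inflate_bounded(const unsigned char *in, size_t in_len)
{
    z_stream    strm = {0};     /* Z_NULL alloc hooks: zlib defaults */
    unsigned char out[CHUNK];
    size_t      total_out = 0;
    int         ret;

    if (inflateInit(&strm) != Z_OK)
        return -1;

    strm.next_in = (Bytef *) in;
    strm.avail_in = (uInt) in_len;

    do
    {
        strm.next_out = out;
        strm.avail_out = CHUNK;

        ret = inflate(&strm, Z_NO_FLUSH);
        if (ret != Z_OK && ret != Z_STREAM_END)
            break;              /* corrupt or truncated stream */

        total_out += CHUNK - strm.avail_out;
        if (total_out > MAX_PREAUTH)
        {
            ret = Z_BUF_ERROR;  /* plaintext cap exceeded: reject */
            break;
        }

        /* ... hand (CHUNK - strm.avail_out) bytes to the caller ... */
    } while (ret != Z_STREAM_END);

    inflateEnd(&strm);
    return (ret == Z_STREAM_END) ? 0 : -1;
}

The important part is that the budget is enforced on the output side,
so the compression ratio an attacker can achieve never matters.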

Okay. So it sounds like your position is similar to Robert's from
earlier: prefer allowing unauthenticated compressed packets for
simplicity, as long as we think it's safe for the server. (Personally
I still think a client that compresses its password packets is doing
it wrong, and we could help them out by refusing that.)

> We own both the canonical client and server, so those are both covered
> here. I would think it's the responsibility of any other system that
> maintains its own implementation of the Postgres protocol and chooses
> to support the compression protocol to perform its own mitigations
> against potential compression security issues.

Sure, but if our official documentation is "here's an extremely
security-sensitive feature, figure it out!" then we've done a
disservice to the community.

> Should we
> put the fixed message size limits (that have de facto been part of the
> protocol since 2021, even if they weren't documented as such) into the
> protocol documentation?

Possibly? I don't know whether the other PG-compatible implementations
use the same limits; it might be better to just say "limits must exist".

> (I don't really see how one could implement other tooling that used
> pg compression without using streaming decompression, as the protocol
> never hands over a standalone blob of compressed data: all compressed
> data is always part of a stream. But even with streaming decompression
> you still need some kind of limit, or you will just chew up memory.)

Well, that's a good point; I wasn't thinking about the streaming APIs
themselves. If the easiest way to implement decompression requires the
use of an API that shouts "hey, give me guardrails!", then that helps
quite a bit. I really need to look into the attack surface of the
three algorithms.
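
As a data point, zstd's streaming interface pretty much forces that
shape: ZSTD_decompressStream() only ever fills a caller-supplied,
fixed-size output buffer, so a total plaintext cap falls out naturally.
A sketch (the 8kB buffer and the max_plain budget are choices I made
for illustration, not anything the patch does):

#include <zstd.h>

/* Returns 0 on success, -1 on error or plaintext budget breach. */
static int
decompress_capped(const void *src, size_t src_len, size_t max_plain)
{
    ZSTD_DStream *ds = ZSTD_createDStream();
    ZSTD_inBuffer in = { src, src_len, 0 };
    unsigned char buf[8192];
    size_t  total_out = 0;

    if (ds == NULL)
        return -1;
    ZSTD_initDStream(ds);

    while (in.pos < in.size)
    {
        /* The API requires a bounded output buffer on every call. */
        ZSTD_outBuffer out = { buf, sizeof(buf), 0 };
        size_t ret = ZSTD_decompressStream(ds, &out, &in);

        if (ZSTD_isError(ret))
        {
            ZSTD_freeDStream(ds);
            return -1;
        }

        total_out += out.pos;
        if (total_out > max_plain)
        {
            ZSTD_freeDStream(ds);
            return -1;          /* decompression bomb: reject */
        }

        /* ... consume out.pos bytes ... */
    }

    ZSTD_freeDStream(ds);
    return 0;
}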

--Jacob
