From: Claudio Freire <klaussfreire(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Stephen Frost <sfrost(at)snowman(dot)net>, Bruce Momjian <bruce(at)momjian(dot)us>, Peter Eisentraut <peter_e(at)gmx(dot)net>, PostgreSQL-Dev <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [PATCH] COPY .. COMPRESSED
Date: 2013-01-16 02:44:45
Message-ID: CAGTBQpbTpWKHo4upnFL8BrM4VvHn+Rw3nuU44HEsNj4AERcPdg@mail.gmail.com
Lists: pgsql-hackers
On Tue, Jan 15, 2013 at 7:46 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> Compressing every small packet seems like it'd be overkill and might
>> surprise people by actually reducing performance in the case of lots of
>> small requests.
>
> Yeah, proper selection and integration of a compression method would be
> critical, which is one reason that I'm not suggesting a plugin for this.
> You couldn't expect any-random-compressor to work well. I think zlib
> would be okay though when making use of its stream compression features.
> The key thing there is to force a stream buffer flush (too lazy to look
> up exactly what zlib calls it, but they have the concept) exactly when
> we're about to do a flush to the socket. That way we get cross-packet
> compression but don't have a problem with the compressor failing to send
> the last partial message when we need it to.
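
FWIW, the flush you mean is zlib's Z_SYNC_FLUSH. A rough sketch of driving it
at socket-flush time could look like the following (placeholder names, not
actual backend code; send_to_socket() just stands in for whatever writes to
the wire):

    /*
     * Minimal sketch: run outgoing bytes through zlib's deflate, and force
     * a Z_SYNC_FLUSH exactly when we are about to flush to the socket, so
     * the peer never waits on a partial compressed message while still
     * getting cross-packet compression.
     */
    #include <stdbool.h>
    #include <stdlib.h>
    #include <zlib.h>

    static z_stream zs;              /* set up once with deflateInit() */
    static unsigned char zbuf[8192]; /* staging buffer for compressed output */

    extern int send_to_socket(const void *buf, size_t len);  /* placeholder */

    /* Compress and send 'len' bytes; sync-flush iff the caller is
     * flushing the socket. */
    static int
    compress_out(const void *data, size_t len, bool socket_flush)
    {
        int flush = socket_flush ? Z_SYNC_FLUSH : Z_NO_FLUSH;

        zs.next_in = (Bytef *) data;
        zs.avail_in = len;

        do
        {
            size_t have;

            zs.next_out = zbuf;
            zs.avail_out = sizeof(zbuf);

            if (deflate(&zs, flush) == Z_STREAM_ERROR)
                return -1;

            have = sizeof(zbuf) - zs.avail_out;
            if (have > 0 && send_to_socket(zbuf, have) < 0)
                return -1;
        } while (zs.avail_out == 0);  /* until deflate drains its output */

        return 0;
    }
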
Just a "stream flush bit" (or stream reset bit) on the packet header
would do. First packet on any stream would be marked, and that's it.
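
Something along these lines, say (field names and flag value purely
illustrative):

    /* Illustrative only: a per-packet header carrying a "stream reset"
     * flag.  The first packet of a compressed stream sets the bit,
     * telling the receiver to (re)initialize its inflate state, e.g.:
     *     if (hdr.flags & PKT_FLAG_STREAM_RESET)
     *         inflateReset(&zs);
     */
    #include <stdint.h>

    #define PKT_FLAG_STREAM_RESET 0x01

    typedef struct
    {
        uint8_t  type;    /* protocol message type */
        uint8_t  flags;   /* PKT_FLAG_STREAM_RESET on the stream's first packet */
        uint32_t length;  /* payload length, network byte order */
    } PacketHeader;
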