From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Euler Taveira <euler(at)timbira(dot)com>
Cc: Pgsql Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: libpq compression
Date: 2012-06-14 05:19:30
Message-ID: 29853.1339651170@sss.pgh.pa.us
Lists: pgsql-hackers
Euler Taveira <euler(at)timbira(dot)com> writes:
> There has already been some discussion about compressing libpq data [1][2][3].
> Recently, I faced a scenario that would have been less problematic if we had
> compression support. The scenario is frequent data loads (aka COPY) over
> slow/unstable links, to be executed on a few hundred PostgreSQL servers all
> over Brazil. Someone could argue that I could use an ssh tunnel to solve the
> problem, but (i) it is complex because it involves a different port in the
> firewall, and (ii) this is an opportunity to improve other scenarios, such as
> reducing bandwidth consumption during replication or normal operation over
> slow/unstable links.
I still think that pushing this off to openssl (not an ssh tunnel, but
the underlying transport library) would be an adequate solution.
If you are shoving data over a connection that is long enough to need
compression, the odds that every bit of it is trustworthy seem pretty
small, so you need encryption too.
We do need the ability to tell openssl to use compression. We don't
need to implement it ourselves, nor to bring a bunch of new library
dependencies into our builds. I especially think that importing bzip2
is a pretty bad idea --- it's not only a new dependency, but bzip2's
compression versus speed tradeoff is entirely inappropriate for this
use-case.
regards, tom lane