From: | "ktm(at)rice(dot)edu" <ktm(at)rice(dot)edu> |
---|---|
To: | Simon Riggs <simon(at)2ndQuadrant(dot)com> |
Cc: | Daniel Farina <daniel(at)heroku(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Faster compression, again |
Date: | 2012-03-15 22:40:09 |
Message-ID: | 20120315224009.GO7440@aart.rice.edu |
Lists: pgsql-hackers
On Thu, Mar 15, 2012 at 10:14:12PM +0000, Simon Riggs wrote:
> On Wed, Mar 14, 2012 at 6:06 PM, Daniel Farina <daniel(at)heroku(dot)com> wrote:
>
> > If we're curious how it affects replication
> > traffic, I could probably gather statistics on LZO-compressed WAL
> > traffic, of which we have a pretty huge amount captured.
>
> What's the compression like for shorter chunks of data? Is it worth
> considering using this for the libpq copy protocol and therefore
> streaming replication also?
>
> --
> Simon Riggs http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Training & Services
Here is a pointer to some tests with Snappy+CouchDB:
They checked compression on smaller chunks of data; I have extracted the
basic results below. The first number is the original size in bytes,
followed by the compressed size in bytes, the compressed size as a
percentage of the original, and the compression ratio:
77 -> 60, 78% or 1.3:1
120 -> 104, 87% or 1.15:1
127 -> 80, 63% or 1.6:1
5942 -> 2930, 49% or 2:1
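For anyone who wants to try the same measurement on their own short
chunks (say, captured WAL records or COPY rows), here is a minimal,
untested sketch using the C wrapper (snappy-c.h) that ships with
Google's snappy library; the file name and the input string are just
placeholders:

/* sketch: report compressed vs. original size for one short chunk.
 * Build with something like: cc test_snappy.c -lsnappy
 * (snappy itself is C++, so you may also need the C++ runtime). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <snappy-c.h>

int main(void)
{
    /* stand-in payload; substitute the chunk you want to test */
    const char *input = "some short chunk of protocol or WAL data";
    size_t in_len = strlen(input);
    size_t out_len = snappy_max_compressed_length(in_len);
    char *out = malloc(out_len);

    if (out == NULL)
        return 1;
    /* snappy_compress updates out_len to the actual compressed size */
    if (snappy_compress(input, in_len, out, &out_len) == SNAPPY_OK)
        printf("%zu -> %zu, %.0f%% or %.2f:1\n",
               in_len, out_len,
               100.0 * (double) out_len / (double) in_len,
               (double) in_len / (double) out_len);
    free(out);
    return 0;
}

Run over a sample of real chunks, that gives numbers directly comparable
to the ones above.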
It looks like a good candidate for both the libpq copy protocol and
streaming replication. My two cents.
Regards,
Ken