From: Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>
To: Mladen Gogala <gogala(dot)mladen(at)gmail(dot)com>, pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: Force re-compression with lz4
Date: 2021-10-18 15:01:04
Message-ID: 6b5a2ce7-6119-505e-5d7b-32e55f87f882@aklaver.com
Lists: pgsql-general
On 10/18/21 06:41, Mladen Gogala wrote:
>
> On 10/18/21 01:07, Michael Paquier wrote:
>> CPU-speaking, LZ4 is *much* faster than pglz when it comes to
>> compression or decompression with its default options. The
>> compression ratio is comparable between the two, though on average
>> LZ4 compresses less than pglz.
>> --
>> Michael
>
> LZ4 works much better with deduplication tools like Data Domain or Data
> Domain Boost (client side deduplication). With zip or gzip compression,
> deduplication ratios are much lower than with LZ4. Most of the modern
> backup tools (DD, Veeam, Rubrik, Commvault) support deduplication. LZ4
> algorithm uses less CPU than zip, gzip or bzip2 and works much better
> with deduplication algorithms employed by the backup tools. This is
> actually a very big and positive change.
Not sure how much this applies to the Postgres usage of lz4. As I
understand it, lz4 is only used internally for table (TOAST) compression.
For pg_dump, the compressed output formats use gzip, unless you pipe
plain-text output through some other program.
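
For context, a minimal sketch of how lz4 table compression is selected
(assuming PostgreSQL 14+ built with --with-lz4; the table and column
names here are hypothetical):

```sql
-- Use lz4 for newly written values in one column; existing rows keep
-- their old compression method until the value is rewritten.
ALTER TABLE docs ALTER COLUMN body SET COMPRESSION lz4;

-- Or make lz4 the default for newly created compressible columns:
SET default_toast_compression = 'lz4';

-- Inspect which method a stored value actually uses (returns NULL for
-- uncompressed values):
SELECT pg_column_compression(body) FROM docs LIMIT 1;
```

For dumps, the piping mentioned above would look something like
`pg_dump -Fp mydb | lz4 > mydb.sql.lz4`, assuming the lz4 command-line
tool is installed.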
>
> Disclosure:
>
> I used to work for Commvault as a senior PS engineer. Commvault was the
> first tool on the market to combine LZ4 and deduplication.
>
> Regards
>
>
--
Adrian Klaver
adrian(dot)klaver(at)aklaver(dot)com