From: | Tomas Vondra <tomas(dot)vondra(at)enterprisedb(dot)com> |
---|---|
To: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Michael Paquier <michael(at)paquier(dot)xyz> |
Cc: | gkokolatos(at)pm(dot)me, Andrew Dunstan <andrew(at)dunslane(dot)net>, Alexander Lakhin <exclusion(at)gmail(dot)com>, Justin Pryzby <pryzby(at)telsasoft(dot)com>, shiy(dot)fnst(at)fujitsu(dot)com, pgsql-hackers(at)lists(dot)postgresql(dot)org, Rachel Heaton <rachelmheaton(at)gmail(dot)com> |
Subject: | Re: Add LZ4 compression in pg_dump |
Date: | 2023-05-08 18:00:39 |
Message-ID: | f735df01-0bb4-2fbc-1297-73a520cfc534@enterprisedb.com |
Lists: | pgsql-hackers |
On 5/8/23 03:16, Tom Lane wrote:
> I wrote:
>> Michael Paquier <michael(at)paquier(dot)xyz> writes:
>>> While testing this patch, I have triggered an error pointing out that
>>> the decompression path of LZ4 is broken for table data. I can
>>> reproduce that with a dump of the regression database, as of:
>>> make installcheck
>>> pg_dump --format=d --file=dump_lz4 --compress=lz4 regression
>
>> Ugh. Reproduced here ... so we need an open item for this.
>
> BTW, it seems to work with --format=c.
>
LZ4Stream_write() forgot to move the pointer to the next chunk, so it
was happily compressing the initial chunk over and over. A bit of an
embarrassing oversight :-(
The custom format calls WriteDataToArchiveLZ4(), which was correct.
The attached patch fixes this for me.
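For illustration, the failure mode boils down to a chunked-write loop that
never advances its input pointer, so every iteration processes the first
chunk again. A minimal standalone sketch of the pattern (with a stand-in
memcpy() instead of the real LZ4F_compressUpdate() call; the function name,
chunk size, and buffer handling here are illustrative, not the actual
pg_dump code):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define CHUNK_SIZE 4			/* illustrative; pg_dump uses a larger I/O buffer */

/*
 * Chunked write: each iteration consumes one chunk of the input. Without
 * the final "ptr = ... + chunk" the loop would re-process the first chunk
 * on every iteration -- the oversight described above.
 */
static void
chunked_write(const void *ptr, size_t size, char *out)
{
	size_t		remaining = size;

	while (remaining > 0)
	{
		size_t		chunk = remaining < CHUNK_SIZE ? remaining : CHUNK_SIZE;

		memcpy(out, ptr, chunk);	/* stand-in for LZ4F_compressUpdate() */
		out += chunk;
		remaining -= chunk;
		ptr = (const char *) ptr + chunk;	/* the missing pointer advance */
	}
}
```

With the advance in place, the full input round-trips through the loop;
drop that line and the output becomes the first chunk repeated, which is
exactly the kind of corrupt stream that only shows up when you try to
restore the dump.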
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Attachment | Content-Type | Size |
---|---|---|
pg-dump-lz4-fix.patch | text/x-patch | 397 bytes |