From: Ron <ronljohnsonjr(at)gmail(dot)com>
To: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: Force re-compression with lz4
Date: 2021-10-17 16:39:07
Message-ID: 1b985e59-74b2-7a0d-9594-9df2c76efb4c@gmail.com
Lists: pgsql-general
On 10/17/21 11:36 AM, Ron wrote:
> On 10/17/21 10:12 AM, Florents Tselai wrote:
>> Hello,
>>
>> I have a table storing mostly text data (40M+ rows) that has
>> pg_total_relation_size ~670GB.
>> I’ve just upgraded to postgres 14 and I’m now eager to try the new LZ4
>> compression.
>>
>> I’ve altered the column to use the new lz4 compression, but that only
>> applies to new rows.
>>
>> What’s the recommended way of triggering the re-evaluation for
>> pre-existing rows?
>>
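For reference, the column switch itself is a one-line ALTER, and PG 14's pg_column_compression() shows which method any stored value actually uses. A sketch, assuming your table is t and the column is named text:

```sql
-- New values written after this use lz4; existing rows are untouched.
ALTER TABLE t ALTER COLUMN text SET COMPRESSION lz4;

-- Inspect per-row compression: 'pglz' for old rows, 'lz4' for new ones,
-- NULL if the value was never compressed.
SELECT id, pg_column_compression(text) FROM t LIMIT 10;
```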
>> I tried wrapping a function like the following, but apparently each old
>> record retains the compression applied.
>> text_corpus = (SELECT t.text FROM ...);
>> DELETE FROM t WHERE id = ...;
>> INSERT INTO t (id, text) VALUES (id, text_corpus);
>
> Because it's all in one transaction?
>
>> For the time being, I resorted to preparing an external shell script to
>> execute against the db, but that's too slow as it moves data in and out
>> of the db.
>>
>> Is there a smarter way to do this ?
>
> Even with in-place compression, you've got to read the uncompressed data.
>
> Does your shell script process one record at a time? Maybe do ranges:
> COPY (SELECT * FROM t WHERE id BETWEEN x AND y) TO '/some/file.csv';
> DELETE FROM t WHERE id BETWEEN x AND y;
I forgot to mention:
VACUUM t;
> COPY t FROM '/some/file.csv';
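Put together, one batch of that round-trip looks like the following sketch; the id bounds and file path are placeholders, and re-inserting the rows is what forces the values to be re-TOASTed with the column's current (lz4) setting:

```sql
-- Assumes t has an integer id key and the column was already switched:
--   ALTER TABLE t ALTER COLUMN text SET COMPRESSION lz4;

COPY (SELECT * FROM t WHERE id BETWEEN 1 AND 100000) TO '/some/file.csv';
DELETE FROM t WHERE id BETWEEN 1 AND 100000;
VACUUM t;                      -- reclaim the dead tuples between batches
COPY t FROM '/some/file.csv';  -- fresh inserts are compressed with lz4

-- Spot-check that the reloaded rows now report 'lz4'
SELECT pg_column_compression(text) FROM t
WHERE id BETWEEN 1 AND 100000 LIMIT 5;
```

One caution: between the COPY TO and the COPY FROM, those rows exist only in the file, so you may want each batch's DELETE and reload inside a single transaction (VACUUM would then run between batches, outside it).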
--
Angular momentum makes the world go 'round.