From: Ron <ronljohnsonjr(at)gmail(dot)com>
To: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: Force re-compression with lz4
Date: 2021-10-17 16:36:40
Message-ID: 08365af1-e076-fe67-6de8-333e1118281e@gmail.com
Lists: pgsql-general
On 10/17/21 10:12 AM, Florents Tselai wrote:
> Hello,
>
> I have a table storing mostly text data (40M+ rows) that has
> pg_total_relation_size ~670GB.
> I’ve just upgraded to postgres 14 and I’m now eager to try the new LZ4
> compression.
>
> I’ve altered the column to use the new lz4 compression, but that only
> applies to new rows.
>
> What’s the recommended way of triggering the re-evaluation for
> pre-existing rows?
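
For anyone following along, the column-level change being described (PostgreSQL
14+; the table and column names t/text here are just stand-ins for the real
schema) is:

    -- Switch the column's compression method for newly written values:
    ALTER TABLE t ALTER COLUMN text SET COMPRESSION lz4;

    -- Existing TOASTed values keep their old method; you can spot-check
    -- which method each stored value actually uses:
    SELECT id, pg_column_compression(text) FROM t LIMIT 10;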
>
> I tried wrapping a function like the following, but apparently each old
> record retains the compression applied:
>
>     text_corpus := (SELECT t.text FROM ...);
>     DELETE FROM t WHERE id = ...;
>     INSERT INTO t (id, text) VALUES (id, text_corpus);
Because it's all in one transaction?
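
A delete-and-reinsert of the same value can end up copying the TOAST datum
as-is. One commonly suggested trick (sketched here with the same stand-in
names; the concatenation forces the value to be detoasted and rewritten, so it
should be stored with the column's current compression method) is:

    -- "SET text = text" may just copy the TOAST pointer unchanged;
    -- appending an empty string forces a detoast + re-store.
    UPDATE t SET text = text || '' WHERE id BETWEEN x AND y;
    VACUUM t;  -- reclaim the dead tuples afterwards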
> Fttb, I resorted to preparing an external shell script to execute against
> the db but that’s too slow as it moves data in&out the db.
>
> Is there a smarter way to do this ?
Even with in-place compression, you've got to read the uncompressed data.
Does your shell script process one record at a time? Maybe do ranges:
COPY (SELECT * FROM t WHERE id BETWEEN x AND y) TO '/some/file.csv' (FORMAT csv);
DELETE FROM t WHERE id BETWEEN x AND y;
COPY t FROM '/some/file.csv' (FORMAT csv);
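
Driven over the whole id range, that might look like the following shell
sketch (hypothetical database name, batch size, and file path; assumes ids
are roughly dense integers and psql is on the PATH):

    #!/bin/sh
    # Recompress table t in batches of 100k ids via COPY out/in.
    DB=mydb
    STEP=100000
    MAX_ID=40000000

    start=1
    while [ "$start" -le "$MAX_ID" ]; do
        end=$((start + STEP - 1))
        psql -d "$DB" -v ON_ERROR_STOP=1 <<SQL
    BEGIN;
    COPY (SELECT * FROM t WHERE id BETWEEN $start AND $end)
        TO '/tmp/batch.csv' (FORMAT csv);
    DELETE FROM t WHERE id BETWEEN $start AND $end;
    COPY t FROM '/tmp/batch.csv' (FORMAT csv);
    COMMIT;
    SQL
        start=$((end + 1))
    done

Batching keeps each transaction (and the dead-tuple bloat it leaves for
VACUUM) bounded, at the cost of many round trips.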
--
Angular momentum makes the world go 'round.