From: Andrey Borodin <x4mmm(at)yandex-team(dot)ru>
To: Fujii Masao <masao(dot)fujii(at)oss(dot)nttdata(dot)com>, Japin Li <japinli(at)hotmail(dot)com>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Compression of bigger WAL records
Date: 2025-01-30 13:26:29
Message-ID: 87E092DC-0E61-426A-8F73-B61A19BCC00D@yandex-team.ru
Lists: pgsql-hackers
> On 23 Jan 2025, at 20:13, Japin Li <japinli(at)hotmail(dot)com> wrote:
>
>
> I find this feature interesting;
Thank you for your interest in the patch!
> however, it cannot be applied to the current
> master (b35434b134b) due to commit 32a18cc0a73.
PFA a rebased version.
>
> I see the patch compresses the WAL record according to the wal_compression,
> IIRC the wal_compression is only used for FPI, right? Maybe we should update
> the description of this parameter.
Yes, I'll update the documentation in future versions too.
> I see that the wal_compression_threshold defaults to 512. I wonder if you
> chose this value based on testing or randomly.
Voices in my head told me it's a good number.
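More seriously: the point of the threshold is that compressing very small records wastes CPU and can even inflate them, so only records above it are compressed. A minimal standalone sketch of such a check (the function name and GUC wiring here are hypothetical, not the patch's actual code):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch, not the patch's actual code. */
#define WAL_COMPRESSION_NONE 0

static int wal_compression = WAL_COMPRESSION_NONE;  /* existing GUC */
static int wal_compression_threshold = 512;         /* proposed GUC, bytes */

/* Compress only records at least wal_compression_threshold bytes long. */
static bool
ShouldCompressRecord(uint32_t total_len)
{
    return wal_compression != WAL_COMPRESSION_NONE &&
           total_len >= (uint32_t) wal_compression_threshold;
}
```

Benchmarking different thresholds against real workloads would be the proper way to justify the default.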
> On 28 Jan 2025, at 22:10, Fujii Masao <masao(dot)fujii(at)oss(dot)nttdata(dot)com> wrote:
>
> I like the idea of WAL compression more.
Thank you!
> With the current approach, each backend needs to allocate memory twice
> the size of the total WAL record. Right? One area is for the gathered
> WAL record data (from rdt and registered_buffers), and the other is for
> storing the compressed data.
Yes, exactly. And also a decompression buffer for each WAL reader.
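For illustration, here is a hedged standalone sketch of that memory pattern, using LZ4 directly (struct and function names are invented, not the patch's actual code):

```c
#include <stdlib.h>
#include <string.h>
#include <lz4.h>

/* Invented stand-in for PostgreSQL's XLogRecData chain. */
typedef struct RecChunk
{
    struct RecChunk *next;
    const char *data;
    int         len;
} RecChunk;

/*
 * Flatten the rdata chain into one buffer (allocation #1), then
 * compress it into a second buffer (allocation #2).  Returns the
 * compressed buffer; *out_len receives the compressed size.
 */
static char *
gather_and_compress(RecChunk *rdt, int total_len, int *out_len)
{
    int   bound = LZ4_compressBound(total_len);
    char *gathered = malloc(total_len);   /* allocation #1 */
    char *compressed = malloc(bound);     /* allocation #2 */
    int   off = 0;

    for (RecChunk *r = rdt; r != NULL; r = r->next)
    {
        memcpy(gathered + off, r->data, r->len);
        off += r->len;
    }

    *out_len = LZ4_compress_default(gathered, compressed, total_len, bound);
    free(gathered);
    return compressed;
}
```

Error handling and NULL checks are omitted for brevity.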
> Could this lead to potential memory usage
> concerns? Perhaps we should consider setting a limit on the maximum
> memory each backend can use for WAL compression?
Yes, the limit makes sense.
Also, we can reduce memory consumption by employing streaming compression. I'm currently working on a prototype of this, because it would also allow wholesale WAL compression. The idea is to reuse the compression context from previous records to better compress new records, which would allow efficient compression of even very small records. However, there is exactly zero chance of getting it into decent shape before the feature freeze.
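To make the context-reuse idea concrete, here is a hedged sketch using LZ4's streaming API (the actual prototype may look entirely different):

```c
#include <stddef.h>
#include <lz4.h>

static LZ4_stream_t *wal_stream = NULL;

/*
 * Compress one record while reusing the stream state built up by
 * previous records, so small records can reference earlier data.
 */
static int
compress_record_streaming(const char *rec, int rec_len,
                          char *dst, int dst_cap)
{
    if (wal_stream == NULL)
        wal_stream = LZ4_createStream();

    /*
     * Caveat: LZ4 streaming requires previously compressed input to
     * remain valid in memory (the dictionary window), which is one of
     * the hard parts of doing this in the WAL insertion path.
     */
    return LZ4_compress_fast_continue(wal_stream, rec, dst,
                                      rec_len, dst_cap, 1);
}
```

The decode side would need matching stream state (LZ4_decompress_safe_continue and friends), which complicates WAL readers correspondingly.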
The chances of getting the currently proposed approach into v18 seem slim as well... I'm hesitant to register this patch in the CF. What do you think?
Best regards, Andrey Borodin.
Attachment: v3-0001-Compress-big-WAL-records.patch (application/octet-stream, 43.0 KB)