From: Fujii Masao <masao(dot)fujii(at)oss(dot)nttdata(dot)com>
To: "Andrey M(dot) Borodin" <x4mmm(at)yandex-team(dot)ru>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Compression of bigger WAL records
Date: 2025-01-28 17:10:18
Message-ID: 87d08e95-0e55-4bcf-9f4b-5c9e7b8d91f8@oss.nttdata.com
Lists: pgsql-hackers

On 2025/01/22 3:24, Andrey M. Borodin wrote:
>
>
>> On 12 Jan 2025, at 17:43, Andrey M. Borodin <x4mmm(at)yandex-team(dot)ru> wrote:
>>
>> I attach a prototype patch.
>
> Here's v2, now it passes all the tests with wal_debug.

I like the idea of WAL compression.

With the current approach, each backend needs to allocate memory of roughly
twice the total WAL record size. Right? One area is for the gathered
WAL record data (from the rdt chain and registered_buffers), and the other
is for storing the compressed output. Could this lead to memory usage
concerns? Perhaps we should consider setting a limit on the maximum
memory each backend can use for WAL compression?

Regards,

--
Fujii Masao
Advanced Computing Technology Center
Research and Development Headquarters
NTT DATA CORPORATION
