Re: Logical Replica ReorderBuffer Size Accounting Issues

From: torikoshia <torikoshia(at)oss(dot)nttdata(dot)com>
To: "Wei Wang (Fujitsu)" <wangw(dot)fnst(at)fujitsu(dot)com>, sawada(dot)mshk(at)gmail(dot)com
Cc: Alex Richman <alexrichman(at)onesignal(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-bugs(at)lists(dot)postgresql(dot)org, Niels Stevens <niels(dot)stevens(at)onesignal(dot)com>
Subject: Re: Logical Replica ReorderBuffer Size Accounting Issues
Date: 2024-05-20 07:02:21
Message-ID: c6155e15b55d812d7281b1d5dc26f0be@oss.nttdata.com
Lists: pgsql-bugs

Hi,

Thank you for working on this issue.
It seems that we have run into the same one.

> On Wed, May 24, 2023 at 9:27 AM Masahiko Sawada
> <sawada(dot)mshk(at)gmail(dot)com> wrote:
>> Yes, it's because the above modification doesn't fix the memory
>> accounting issue but only reduces memory bloat in some (extremely bad)
>> cases. Without this modification, it was possible for the maximum
>> actual memory usage to easily reach several tens of times
>> logical_decoding_work_mem (e.g. 4GB vs. 256MB as originally reported).
>> Since the reorderbuffer still doesn't account for memory
>> fragmentation etc., it remains possible for the actual memory usage
>> to reach several times logical_decoding_work_mem. In my environment,
>> with the reproducer.sh you shared, the total actual memory usage
>> reached up to about 430MB while logical_decoding_work_mem was 256MB.
>> Probably even if we use another type of memory allocator, such as
>> AllocSet, a similar issue would still happen. If we want the
>> reorderbuffer memory usage to never exceed logical_decoding_work_mem,
>> we would need to change how the reorderbuffer uses and accounts for
>> memory, which would require much work, I guess.
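
As a side note, my understanding of why the accounted size and the
actual allocation can diverge is that the reorderbuffer sums only the
logical size of each change, while the allocator reserves whole blocks.
A rough sketch of that bookkeeping (simplified and abbreviated from
reorderbuffer.c, not the exact code):

    /*
     * Simplified sketch: only the logical size of each change is added
     * to txn->size and rb->size, so per-block allocator overhead and
     * fragmentation are never counted.
     */
    static void
    ReorderBufferChangeMemoryUpdate(ReorderBuffer *rb,
                                    ReorderBufferChange *change,
                                    bool addition)
    {
        Size        sz = ReorderBufferChangeSize(change); /* logical size only */
        ReorderBufferTXN *txn = change->txn;

        if (addition)
        {
            txn->size += sz;
            rb->size += sz; /* compared against logical_decoding_work_mem */
        }
        else
        {
            txn->size -= sz;
            rb->size -= sz;
        }
    }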

Considering that the manual says logical_decoding_work_mem "specifies
the maximum amount of memory to be used by logical decoding", and that
users would then find the parameter straightforward to tune, it may be
best to do that work.
However...

>>> One idea to deal with this issue is to choose the block sizes
>>> carefully while measuring the performance as the comment shows:
>>>
>>>     /*
>>>      * XXX the allocation sizes used below pre-date generation
>>>      * context's block growing code. These values should likely be
>>>      * benchmarked and set to more suitable values.
>>>      */
>>>     buffer->tup_context = GenerationContextCreate(new_ctx,
>>>                                                   "Tuples",
>>>                                                   SLAB_LARGE_BLOCK_SIZE,
>>>                                                   SLAB_LARGE_BLOCK_SIZE,
>>>                                                   SLAB_LARGE_BLOCK_SIZE);

Since this idea can prevent the issue in some situations, if not all of
them, it may be a good mitigation measure.
One concern is that it would cause more frequent malloc() calls, but
that seems better than memory bloat, doesn't it?
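
For instance, a minimal sketch of the kind of change I have in mind
(the smaller sizes below are untested placeholders and would of course
need the benchmarking the XXX comment asks for):

    /*
     * Hypothetical alternative: start with small blocks and let the
     * generation context's block growing code scale them up, instead
     * of using fixed 8MB (SLAB_LARGE_BLOCK_SIZE) blocks.
     */
    buffer->tup_context = GenerationContextCreate(new_ctx,
                                                  "Tuples",
                                                  SLAB_DEFAULT_BLOCK_SIZE,
                                                  SLAB_DEFAULT_BLOCK_SIZE,
                                                  SLAB_LARGE_BLOCK_SIZE);

With growing blocks, the extra malloc() calls would mostly affect small
transactions, while large transactions would still end up on 8MB blocks
once the context has grown.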

--
Regards,

Atsushi Torikoshi
NTT DATA Group Corporation
