Re: Logical Replica ReorderBuffer Size Accounting Issues

From: Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>
To: "wangw(dot)fnst(at)fujitsu(dot)com" <wangw(dot)fnst(at)fujitsu(dot)com>
Cc: Alex Richman <alexrichman(at)onesignal(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, "pgsql-bugs(at)lists(dot)postgresql(dot)org" <pgsql-bugs(at)lists(dot)postgresql(dot)org>, Niels Stevens <niels(dot)stevens(at)onesignal(dot)com>
Subject: Re: Logical Replica ReorderBuffer Size Accounting Issues
Date: 2023-05-09 01:33:08
Message-ID: CAD21AoBEYN43nejnHpg819+=jtq8u8G3DFLcFAtBUtNydnbiQg@mail.gmail.com
Lists: pgsql-bugs

On Fri, Jan 13, 2023 at 8:17 PM wangw(dot)fnst(at)fujitsu(dot)com
<wangw(dot)fnst(at)fujitsu(dot)com> wrote:
>
> On Thu, Jan 12, 2023 at 21:02 PM Alex Richman <alexrichman(at)onesignal(dot)com> wrote:
> > On Thu, 12 Jan 2023 at 10:44, wangw(dot)fnst(at)fujitsu(dot)com
> > <wangw(dot)fnst(at)fujitsu(dot)com> wrote:
> > > I think parallelism doesn't affect this problem. Because for a walsender, I
> > > think it will always read the wal serially in order. Please let me know if I'm
> > > missing something.
> > I suspect it's more about getting enough changes into the WAL quickly enough
> > for walsender to not spend any time idle. I suppose you could stack the deck
> > towards this by first disabling the subscription, doing the updates to spool a
> > bunch of changes in the WAL, then enabling the subscription again. Perhaps
> > there is also some impact in the WAL records interleaving from the concurrent
> > updates and making more work for the reorder buffer.
> > The servers I am testing on are quite beefy, so it might be a little harder to
> > generate sufficient load if you're testing locally on a laptop or something.
> >
> > > And I tried to use the table structure and UPDATE statement you described.
> > > But unfortunately I didn't catch 1GB or unexpected (I mean a size well
> > > beyond 256MB) usage in rb->tup_context. Could you please help me confirm
> > > my test? Here are my test details:
> > Here's test scripts that replicate it for me: [1]
> > This is on 15.1, installed on debian-11, running on GCP n2-highmem-80 (IceLake)
> > /w 24x Local SSD in raid0.
>
> Thanks for the details you shared.
>
> Yes, I think you are right. I think I reproduced this problem as you suggested
> (updating the entire table in parallel). And I can reproduce this problem on
> both current HEAD and REL_15_1. The memory used in rb->tup_context can reach
> 350MB in HEAD and 600MB in REL_15_1.
>
> Here are my steps to reproduce:
> 1. Apply the attached diff patch to add some logs for confirmation.
> 2. Use the attached reproduction script to reproduce the problem.
> 3. Confirm the debug log that is output to the log file pub.log.
>
> After doing some research, I agree with the idea you mentioned before. I think
> this problem is caused by the implementation of the 'Generational allocator' or
> by the way we use its API.

I think there are two separate issues. One is a pure memory accounting
issue: since the reorderbuffer accounts for memory usage by calculating
the actual tuple size etc., it includes neither the chunk header size
nor the fragmentation within blocks. So I can understand why the output
of MemoryContextStats(rb->context) could be two or three times higher
than logical_decoding_work_mem and doesn't match rb->size in some cases.
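
To make that gap concrete, here is a rough standalone sketch (not
PostgreSQL code; the chunk-header and block sizes are assumptions for
illustration only) of how the per-tuple accounting can fall short of
what the allocator actually reserves. Fragmentation in partially-freed
blocks can widen the gap further:

    #include <stdio.h>

    /*
     * Illustrative only: "accounted" sums the logical tuple sizes the way
     * rb->size does, while the allocator also pays a per-chunk header and
     * leaves the tail of each block unused.
     */
    int
    main(void)
    {
        const long ntuples = 1000000;             /* decoded changes */
        const long tuple_size = 120;              /* size the reorderbuffer accounts */
        const long chunk_header = 16;             /* assumed per-chunk overhead */
        const long block_size = 8L * 1024 * 1024; /* assumed allocator block size */

        long accounted = ntuples * tuple_size;    /* analogous to rb->size */
        long chunks_per_block = block_size / (tuple_size + chunk_header);
        long nblocks = (ntuples + chunks_per_block - 1) / chunks_per_block;
        long allocated = nblocks * block_size;    /* what the context holds */

        printf("accounted: %ld MB, allocated: %ld MB\n",
               accounted / (1024 * 1024), allocated / (1024 * 1024));
        return 0;
    }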

However, that cannot explain the original issue, where the memory usage
(reported by MemoryContextStats(rb->context)) reached 5GB in spite of
logical_decoding_work_mem being 256MB, which looks like a memory leak or
a place where we ignore the memory limit. My colleague Drew Callahan
helped me investigate this issue, and we found out that in
ReorderBufferIterTXNInit() we restore the changes of the top transaction
as well as of all subtransactions, but we don't respect the memory limit
when restoring them. It's normally not a problem since each decoded
change is typically not large and we restore up to 4096 changes at a
time. However, if the transaction has many subtransactions, we restore
4096 changes for each subtransaction at once, temporarily consuming a
lot of memory. This behavior can explain the symptoms of the original
issue, but one thing I'm unsure about is that Alex reported that
replacing MemoryContextAlloc() in ReorderBufferGetChange() with malloc()
resolved the issue. I think that can resolve/mitigate the former issue
(the memory accounting issue), but IIUC it cannot fix the latter one.
There might be other factors in this problem. Anyway, I've attached a
simple reproducer that works as a TAP test for test_decoding.
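
As a back-of-the-envelope illustration of why that restore path can blow
past the limit (the per-change size and subtransaction count below are
made-up numbers, not taken from the report): restoring up to 4096
spilled changes for the top transaction and for every subtransaction at
once gives a transient peak of roughly 4096 * change_size * (nsubxacts + 1),
independent of logical_decoding_work_mem:

    #include <stdio.h>

    /*
     * Rough sketch only; assumes ~200 bytes per restored change and 5000
     * subtransactions just to show the order of magnitude.
     */
    int
    main(void)
    {
        const long long restore_batch = 4096; /* changes restored per (sub)xact */
        const long long change_size = 200;    /* assumed bytes per change */
        const long long nsubxacts = 5000;     /* subtransactions in the top xact */

        long long peak = restore_batch * change_size * (nsubxacts + 1);

        printf("transient peak during iterator init: ~%lld MB\n",
               peak / (1024 * 1024));         /* ~3900 MB with these numbers */
        return 0;
    }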

I think we need to discuss these two things separately, and we should
deal with at least the latter problem.

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com

Attachment Content-Type Size
002_rb_memory.pl text/x-perl 870 bytes
