Re: Reduce useless changes before reassembly during logical replication

From: Bharath Rupireddy <bharath(dot)rupireddyforpostgres(at)gmail(dot)com>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: li jie <ggysxcq(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Reduce useless changes before reassembly during logical replication
Date: 2024-01-18 11:14:00
Message-ID: CALj2ACVGxXpPi822ESNyD2zz3xhuAqwjCYpVLeN9DUFMU-1pdQ@mail.gmail.com
Lists: pgsql-hackers

On Thu, Jan 18, 2024 at 2:47 PM Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
>
> On Thu, Jan 18, 2024 at 12:12 PM Bharath Rupireddy
> <bharath(dot)rupireddyforpostgres(at)gmail(dot)com> wrote:
> >
> > On Wed, Jan 17, 2024 at 11:45 AM li jie <ggysxcq(at)gmail(dot)com> wrote:
> > >
> > > Hi hackers,
> > >
> > > During logical replication, if there is a large write transaction, some
> > > spill files will be written to disk, depending on the setting of
> > > logical_decoding_work_mem.
> > >
> > > This behavior can effectively avoid OOM, but if the transaction
> > > generates a lot of changes before commit, a large number of files may
> > > fill the disk; for example, an update of a TB-scale table can do this.
> > >
> > > However, I found an inelegant phenomenon: even if the modified large
> > > table is not published, its changes are still decoded and written to a
> > > large number of spill files.
> > > Look at an example below:
> >
> > Thanks. I agree that decoding and queuing the changes of unpublished
> > tables into the reorder buffer is unnecessary work for the walsender.
> > It takes processing effort (CPU overhead), consumes disk space, and
> > inefficiently uses the memory configured via logical_decoding_work_mem
> > for the replication connection.
> >
>
> This is all true, but note that in successful cases (where the table is
> published) all the work done by FilterByTable (accessing caches,
> transaction-related stuff) can add noticeable overhead, since we do that
> work anyway later in pgoutput_change().

Right. The overhead for published tables needs to be studied. A
possible way is to mark the checks already performed in
FilterByTable/filter_by_table_cb and skip the same checks in
pgoutput_change(). I'm not sure if this works without any issues,
though.
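Something along the following lines, perhaps. This is only a sketch of
the idea, not working code: filter_checked is a hypothetical new flag
on ReorderBufferChange, FilterByTable() is the filter function from the
proposed patch (its return convention below is assumed), and
get_rel_sync_entry()/pubactions are the existing pgoutput checks:

    /*
     * At decode time, before queuing the change (assuming a true
     * return from FilterByTable() means "filter this change out"):
     */
    if (FilterByTable(ctx, change))
    {
        /* unpublished relation: drop the change instead of queuing it */
        ReorderBufferReturnChange(rb, change, true);
        return;
    }
    change->filter_checked = true;  /* published; remember the verdict */

    /*
     * Later, in pgoutput_change(): the publication-action check could
     * be skipped when the walsender already verified the relation
     * (the entry itself may still be needed for other state):
     */
    relentry = get_rel_sync_entry(data, relation);
    if (!change->filter_checked &&
        !relentry->pubactions.pubinsert)    /* etc. for update/delete */
        return;

The tricky part is likely keeping that cached verdict valid across
publication invalidations (e.g. ALTER PUBLICATION) between decode time
and output time, which may be where "without any issues" gets hard.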

--
Bharath Rupireddy
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com
