From: Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>
To: Dilip Kumar <dilipbalaut(at)gmail(dot)com>
Cc: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Peter Smith <smithpb2250(at)gmail(dot)com>, Ajin Cherian <itsajin(at)gmail(dot)com>, vignesh C <vignesh21(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Skip collecting decoded changes of already-aborted transactions
Date: 2024-12-10 21:48:21
Message-ID: CAD21AoApi6Mh6DJwyh_gmFDHAc_j_zDVcWJ-PaFknyaPjKE_SQ@mail.gmail.com
Lists: pgsql-hackers
On Mon, Dec 9, 2024 at 10:19 PM Dilip Kumar <dilipbalaut(at)gmail(dot)com> wrote:
>
> On Tue, Dec 10, 2024 at 11:09 AM Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> >
> > On Tue, Dec 10, 2024 at 10:59 AM Dilip Kumar <dilipbalaut(at)gmail(dot)com> wrote:
> > >
> > > On Tue, Nov 26, 2024 at 3:02 AM Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com> wrote:
> > >
> > > >
> > > > I've attached a new version of the patch that incorporates all the
> > > > comments I got so far.
> > > >
> > > > I think the patch is in good shape, but I'm considering whether we
> > > > might want to call ReorderBufferToastReset() in
> > > > ReorderBufferTruncateTXNIfAborted() after truncating all changes,
> > > > just in case. I will investigate further.
> > > >
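To be concrete, what I had in mind is something like the following. This is an untested sketch, not the code in the patch; rbtxn_is_aborted() stands in for whatever abort check the patch ends up using, and the ReorderBufferTruncateTXN() arguments are simplified:

static void
ReorderBufferTruncateTXNIfAborted(ReorderBuffer *rb, ReorderBufferTXN *txn)
{
    /* Nothing to do unless the transaction is known to be aborted */
    if (!rbtxn_is_aborted(txn))
        return;

    /* Discard all changes collected so far */
    ReorderBufferTruncateTXN(rb, txn, false);

    /*
     * Also release any TOAST reconstruction data still attached to the
     * transaction, just in case truncation left it behind.
     */
    ReorderBufferToastReset(rb, txn);
}
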
> > >
> > > There’s something that seems a bit odd to me. Consider the case where
> > > the largest transaction(s) are aborted. If
> > > ReorderBufferCanStartStreaming() returns true, the changes of such a
> > > transaction are discarded only if it is a streamable transaction.
> > > However, if ReorderBufferCanStartStreaming() is false, the changes
> > > are discarded regardless of streamability.
> > >
> > > What seems strange to me in this patch is that whether we truncate
> > > the changes of a large aborted transaction depends on whether we need
> > > to stream or spill; IMHO that decision should be completely
> > > independent. My concern is that if the largest transaction is aborted
> > > but isn’t yet streamable, we might end up picking the next
> > > transaction, which could be much smaller. This smaller transaction
> > > might not bring us back within the memory limit, and we could repeat
> > > this process for a few more transactions. In contrast, it might be
> > > more efficient to simply discard the large aborted transaction, even
> > > if it’s not streamable, to avoid this issue.
> > >
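For readers following along, the eviction path in question looks roughly like this (simplified from ReorderBufferCheckMemoryLimit() in reorderbuffer.c; assertions and details omitted):

while (rb->size >= logical_decoding_work_mem * 1024L)
{
    ReorderBufferTXN *txn;

    if (ReorderBufferCanStartStreaming(rb) &&
        (txn = ReorderBufferLargestStreamableTopTXN(rb)) != NULL)
    {
        /* stream the largest streamable top-level transaction */
        ReorderBufferStreamTXN(rb, txn);
    }
    else
    {
        /* spill the largest transaction (or subtransaction) to disk */
        txn = ReorderBufferLargestTXN(rb);
        ReorderBufferSerializeTXN(rb, txn);
    }
}

The two branches pick the victim transaction differently, which is why the patch's truncation of aborted transactions can behave differently depending on which branch is taken.
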
> >
> > If the largest transaction is non-streamable, won't the transaction
> > returned by ReorderBufferLargestTXN() in the other case already
> > satisfy the need?
>
> I see your point, but I don’t think it’s quite the same. When
> ReorderBufferCanStartStreaming() is true, the function
> ReorderBufferLargestStreamableTopTXN() looks for the largest
> transaction among those that have a base_snapshot. So, if the largest
> transaction is aborted but hasn’t yet received a base_snapshot, it
> will instead select the largest transaction that does have a
> base_snapshot, which could be significantly smaller than the largest
> aborted transaction.
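For reference, the selection being described looks roughly like this (simplified from ReorderBufferLargestStreamableTopTXN(); only transactions on the base-snapshot list are considered):

static ReorderBufferTXN *
ReorderBufferLargestStreamableTopTXN(ReorderBuffer *rb)
{
    ReorderBufferTXN *largest = NULL;
    Size        largest_size = 0;
    dlist_iter  iter;

    /*
     * Only transactions that have a base snapshot are on this list, so
     * a large transaction without one is never considered here.
     */
    dlist_foreach(iter, &rb->txns_by_base_snapshot_lsn)
    {
        ReorderBufferTXN *txn;

        txn = dlist_container(ReorderBufferTXN, base_snapshot_node,
                              iter.cur);

        if (txn->total_size > largest_size &&
            rbtxn_has_streamable_change(txn))
        {
            largest = txn;
            largest_size = txn->total_size;
        }
    }

    return largest;
}
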
IIUC a transaction entry in the reorderbuffer is assigned its base
snapshot before its first change is decoded (see
SnapBuildProcessChange()). In which case, then, would a transaction
lack a base snapshot while having the largest amount of changes?
Subtransaction entries can transfer their base snapshot to their
parent transaction entry, but such subtransactions would still be
picked by ReorderBufferLargestTXN().
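
To illustrate the first point, a simplified excerpt in the spirit of SnapBuildProcessChange() (not the verbatim code):

/* A base snapshot is set up before the first change is queued. */
if (!ReorderBufferXidHasBaseSnapshot(builder->reorder, xid))
{
    /* build a new snapshot only if we don't have a prebuilt one */
    if (builder->snapshot == NULL)
    {
        builder->snapshot = SnapBuildBuildSnapshot(builder);
        SnapBuildSnapIncRefcount(builder->snapshot);
    }

    SnapBuildSnapIncRefcount(builder->snapshot);
    ReorderBufferSetBaseSnapshot(builder->reorder, xid, lsn,
                                 builder->snapshot);
}

The subtransaction case I mentioned is handled by ReorderBufferTransferSnapToParent(), which moves a subtransaction's base snapshot up to its top-level transaction.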
Regards,
--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com