From: Dilip Kumar <dilipbalaut(at)gmail(dot)com>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Peter Smith <smithpb2250(at)gmail(dot)com>, Ajin Cherian <itsajin(at)gmail(dot)com>, vignesh C <vignesh21(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Skip collecting decoded changes of already-aborted transactions
Date: 2024-12-12 06:01:03
Message-ID: CAFiTN-us5d3yJannbb+w=0pyTJXycVT68z44mA7mjbt-JZ4tvg@mail.gmail.com
Lists: pgsql-hackers
On Thu, Dec 12, 2024 at 11:08 AM Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
>
> On Wed, Dec 11, 2024 at 8:21 AM Dilip Kumar <dilipbalaut(at)gmail(dot)com> wrote:
> >
> > On Wed, Dec 11, 2024 at 3:18 AM Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com> wrote:
> > >
> > > On Mon, Dec 9, 2024 at 10:19 PM Dilip Kumar <dilipbalaut(at)gmail(dot)com> wrote:
> >
> > > > >
> > > > > If the largest transaction is non-streamable, won't the transaction
> > > > > returned by ReorderBufferLargestTXN() in the other case already
> > > > > suffice the need?
> > > >
> > > > I see your point, but I don’t think it’s quite the same. When
> > > > ReorderBufferCanStartStreaming() is true, the function
> > > > ReorderBufferLargestStreamableTopTXN() looks for the largest
> > > > transaction among those that have a base_snapshot. So, if the largest
> > > > transaction is aborted but hasn’t yet received a base_snapshot, it
> > > > will instead select the largest transaction that does have a
> > > > base_snapshot, which could be significantly smaller than the largest
> > > > aborted transaction.
> > >
> > > IIUC the transaction entries in reorderbuffer have the base snapshot
> > > before decoding the first change (see SnapBuildProcessChange()). In
> > > which case the transaction doesn't have the base snapshot and has the
> > > largest amount of changes? Subtransaction entries could transfer its
> > > base snapshot to its parent transaction entry but such subtransactions
> > > will be picked by ReorderBufferLargestTXN().
> > >
> > IIRC, there could be cases where reorder buffers of transactions can
> > grow in size without having a base snapshot, I think transactions
> > doing DDLs and generating a lot of INVALIDATION messages could fall in
> > such a category.
> >
>
> Are we recording such changes in the reorder buffer? If so, can you
> please share how?
xact_decode() does add XLOG_XACT_INVALIDATIONS records to the reorder
buffer, and for such changes we don't call SnapBuildProcessChange().
That means it is possible to collect such changes in the reorder buffer
without setting the base_snapshot.
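
To make that concrete, here is a simplified paraphrase of the relevant
branch of xact_decode() in decode.c (the exact code differs a bit across
versions, so please treat this as a sketch rather than the real thing):

    /*
     * Simplified paraphrase of the XLOG_XACT_INVALIDATIONS handling in
     * xact_decode() (decode.c).  Unlike the DML decoding paths, there is
     * no SnapBuildProcessChange() call here: the invalidations are queued
     * as reorder buffer changes (and counted towards
     * logical_decoding_work_mem) while the transaction's base_snapshot
     * stays unset.
     */
    case XLOG_XACT_INVALIDATIONS:
        {
            TransactionId   xid = XLogRecGetXid(r);
            xl_xact_invals *invals = (xl_xact_invals *) XLogRecGetData(r);

            if (TransactionIdIsValid(xid))
            {
                /* queue the invalidation messages for this xid */
                ReorderBufferAddInvalidations(ctx->reorder, xid, buf->origptr,
                                              invals->nmsgs, invals->msgs);
                ReorderBufferXidSetCatalogChanges(ctx->reorder, xid,
                                                  buf->origptr);
            }
        }
        break;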
> AFAICU, the main idea behind skipping aborts is to
> avoid sending a lot of data to the client that later needs to be
> discarded or cases where we spent resources/time spilling the changes
> that later need to be discarded. In that vein, the current idea of the
> patch where it truncates and skips aborted xacts before streaming or
> spilling them sounds reasonable.
I believe that in one of my previous responses (a few emails above) I
agreed it's a reasonable goal to check for aborted transactions just
before spilling or streaming, and if we detect an aborted transaction,
we can avoid streaming/spilling and simply discard the changes.
However, I wanted to make the point that if we have a large aborted
transaction without a base snapshot (assuming that's possible), we
might end up streaming many small transactions to stay under the memory
limit. Even then we still might not succeed, because the main issue is
the large aborted transaction, which doesn't have a base snapshot and
is therefore never selected for streaming.
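
For reference, when streaming is possible the victim transaction is
picked by ReorderBufferLargestStreamableTopTXN(), which only walks the
list of transactions that already have a base snapshot, roughly like
this (simplified from reorderbuffer.c; details vary by version):

    /*
     * Simplified paraphrase of ReorderBufferLargestStreamableTopTXN()
     * (reorderbuffer.c).  Only transactions on the
     * txns_by_base_snapshot_lsn list are candidates, so a large
     * transaction whose base_snapshot is not set can never be chosen
     * here, and we keep picking the (smaller) transactions that do have
     * one.
     */
    static ReorderBufferTXN *
    ReorderBufferLargestStreamableTopTXN(ReorderBuffer *rb)
    {
        ReorderBufferTXN *largest = NULL;
        Size              largest_size = 0;
        dlist_iter        iter;

        dlist_foreach(iter, &rb->txns_by_base_snapshot_lsn)
        {
            ReorderBufferTXN *txn =
                dlist_container(ReorderBufferTXN, base_snapshot_node, iter.cur);

            if (txn->total_size > largest_size &&
                rbtxn_has_streamable_change(txn) &&
                !rbtxn_has_partial_change(txn))
            {
                largest = txn;
                largest_size = txn->total_size;
            }
        }

        return largest;
    }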
So, instead of streaming many small transactions, if we had selected
the largest transaction first and checked if it was aborted, we could
have avoided streaming all those smaller transactions. I agree this is
a hypothetical scenario and may not be worth optimizing, and that's
completely fair. I just wanted to clarify the point I raised when I
first started reviewing this patch.
I haven't tried it myself, but I believe this scenario could be
created by starting a transaction that performs multiple DDLs and then
ultimately gets aborted.
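
Just to make that hypothetical scenario concrete, an untested repro
sketch (libpq; slot/table names are made up, and it assumes
wal_level=logical plus the test_decoding plugin) could look something
like this:

    /*
     * Untested sketch of the hypothesized scenario: a transaction that
     * only performs DDLs (generating many invalidation messages) and is
     * finally rolled back while a logical slot is waiting to decode it.
     */
    #include <stdio.h>
    #include <libpq-fe.h>

    static void
    run(PGconn *conn, const char *sql)
    {
        PGresult   *res = PQexec(conn, sql);

        if (PQresultStatus(res) != PGRES_COMMAND_OK &&
            PQresultStatus(res) != PGRES_TUPLES_OK)
            fprintf(stderr, "%s failed: %s", sql, PQerrorMessage(conn));
        PQclear(res);
    }

    int
    main(void)
    {
        PGconn     *conn = PQconnectdb("dbname=postgres");
        char        sql[128];
        int         i;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            return 1;
        }

        /* create the slot first so the transaction below gets decoded */
        run(conn, "SELECT pg_create_logical_replication_slot('repro_slot', 'test_decoding')");

        /* lots of DDLs in a single transaction, which is then aborted */
        run(conn, "BEGIN");
        for (i = 0; i < 10000; i++)
        {
            snprintf(sql, sizeof(sql), "CREATE TABLE repro_tbl_%d (a int)", i);
            run(conn, sql);
            snprintf(sql, sizeof(sql), "DROP TABLE repro_tbl_%d", i);
            run(conn, sql);
        }
        run(conn, "ROLLBACK");

        /* decode; spill/stream behavior can be watched via pg_stat_replication_slots */
        run(conn, "SELECT count(*) FROM pg_logical_slot_get_changes('repro_slot', NULL, NULL)");

        run(conn, "SELECT pg_drop_replication_slot('repro_slot')");
        PQfinish(conn);
        return 0;
    }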
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com