From: Dilip Kumar <dilipbalaut(at)gmail(dot)com>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: shveta malik <shveta(dot)malik(at)gmail(dot)com>, Tomas Vondra <tomas(dot)vondra(at)enterprisedb(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>, "Zhijie Hou (Fujitsu)" <houzj(dot)fnst(at)fujitsu(dot)com>, Nisha Moond <nisha(dot)moond412(at)gmail(dot)com>
Subject: Re: Conflict Detection and Resolution
Date: 2024-07-05 06:27:44
Message-ID: CAFiTN-tnLuh4AVLdxEpG8x42DP3cV_yMy641iV+LOgYAr1WmkQ@mail.gmail.com
Lists: pgsql-hackers
On Thu, Jul 4, 2024 at 5:37 PM Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> > > So, the situation will be the same. We can even
> > > decide to spill the data to files if the decision is that we need to
> > > wait to avoid network buffer-fill situations. But note that the wait
> > > in apply worker has consequences that the subscriber won't be able to
> > > confirm the flush position and publisher won't be able to vacuum the
> > > dead rows, and we won't be able to remove WAL either. Last time when we
> > > discussed the delay_apply feature, we decided not to proceed because
> > > of such issues. This is the reason I proposed a cap on wait time.
> >
> > Yes, spilling to file or cap on the wait time should help, and as I
> > said above maybe a parallel apply worker can also help.
> >
>
> It is not clear to me how a parallel apply worker can help in this
> case. Can you elaborate on what you have in mind?
If we decide to wait at commit time, and before starting to apply we can
already see that the remote commit_ts is ahead of the local clock, then we
could hand such transactions to a parallel apply worker. Wouldn't that solve
the network buffer congestion issue? The main apply worker could then move
ahead and fetch new transactions from the buffer, since the waiting
transaction would no longer block it.

I understand that if one transaction is going to wait at commit, the
transactions we fetch after it will likely wait as well: any transaction
committed after the waiting one must have an even later commit_ts, so it too
would be handed to another parallel worker, and if the clock skew is large we
would soon exhaust all the parallel workers. So I won't claim this resolves
the problem; we would still have to fall back to spilling to disk, but only
in the worst case, when the clock skew is really huge. In the common case of
slight clock drift, by the time we finish applying a medium-to-large
transaction the local clock should have caught up with the remote commit_ts,
and in most cases we would not have to wait at all.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com