From: Dilip Kumar <dilipbalaut(at)gmail(dot)com>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: Michael Paquier <michael(at)paquier(dot)xyz>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions
Date: 2020-02-05 04:18:48
Message-ID: CAFiTN-vZ2EHjQQaGBJEbf-NnkrFkTEj+VbjrGtrfv=ck7crd+w@mail.gmail.com
Lists: pgsql-hackers
On Wed, Feb 5, 2020 at 9:27 AM Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
>
> On Tue, Feb 4, 2020 at 11:00 AM Dilip Kumar <dilipbalaut(at)gmail(dot)com> wrote:
> >
> > On Tue, Jan 28, 2020 at 11:43 AM Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> > >
> > >
> > > One more thing we can do is to identify whether the tuple belongs to
> > > a toast relation while decoding it. However, I think to do that we
> > > need access to the relcache at that time, and that might add some
> > > overhead as we need to do it for each tuple. Can we investigate what
> > > it will take to do that and whether it is better than setting a bit
> > > during WAL logging?
> > >
> > I have done some more analysis on this and it appears that there are
> > a few problems in doing this. Basically, once we get the confirmed
> > flush location, we advance replication_slot_catalog_xmin so that
> > vacuum can garbage-collect the old tuples. So the problem is that
> > while we are collecting the changes in the ReorderBuffer, the catalog
> > rows for that version might already have been removed, and we might
> > not find any relation entry with that relfilenode (because it was
> > dropped or altered later).
> >
>
> Hmm, this means this can also occur while streaming the changes. The
> main reason, as I understand it, is that before decoding the commit we
> don't know whether these changes have already been sent to the
> subscriber (based on confirmed_flush_location/start_decoding_at).
Right.
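
To make the failure mode concrete, here is a rough sketch (illustrative
only, not from any posted patch) of the kind of per-tuple check discussed
above, assuming we are allowed to consult the relcache while decoding.
RelidByRelfilenode(), RelationIdGetRelation() and RELKIND_TOASTVALUE are
existing symbols; the surrounding logic is just an assumption for
illustration. The point is that once the catalog row has been vacuumed
away after replication_slot_catalog_xmin advanced, the relfilenode no
longer maps to any relation and we cannot tell whether the tuple was a
toast chunk:

#include "postgres.h"

#include "catalog/pg_class.h"
#include "utils/rel.h"
#include "utils/relcache.h"
#include "utils/relfilenodemap.h"

/*
 * Illustrative sketch only: does the decoded tuple belong to a toast
 * relation?  Requires a transaction with a historic snapshot set up,
 * and cannot answer once the catalog entry has been removed.
 */
static bool
change_is_toast_chunk(Oid reltablespace, Oid relfilenode)
{
	Oid			reloid = RelidByRelfilenode(reltablespace, relfilenode);
	Relation	rel;
	bool		is_toast;

	if (!OidIsValid(reloid))
		return false;			/* catalog row already gone, can't tell */

	rel = RelationIdGetRelation(reloid);
	if (!RelationIsValid(rel))
		return false;

	is_toast = (rel->rd_rel->relkind == RELKIND_TOASTVALUE);
	RelationClose(rel);
	return is_toast;
}

Doing that lookup for every decoded tuple is also where the extra
relcache overhead you mentioned earlier would come from.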
> I think it is better to skip streaming such transactions, as we can't
> make the right decision about them. Since this generally happens only
> for the first few transactions after a crash, it shouldn't matter much
> if we serialize such transactions instead of streaming them.
The idea makes sense to me.
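
In other words, the streaming path would roughly do something like the
following (illustrative pseudologic only: ReorderBufferSerializeTXN()
already exists, while ReorderBufferStreamTXN() and the exact condition
here are assumptions following the direction of this patch series):

/*
 * Illustrative only: stream an in-progress transaction only when we are
 * sure none of its changes could already have been sent to the
 * subscriber, i.e. it started at or after the point from which we are
 * allowed to decode.  Otherwise fall back to spilling it to disk and
 * decide at commit time, as we do today.
 */
if (txn->first_lsn >= start_decoding_at)
    ReorderBufferStreamTXN(rb, txn);    /* safe to stream incrementally */
else
    ReorderBufferSerializeTXN(rb, txn); /* serialize instead of streaming */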
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com