From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Tomas Vondra <tomas(dot)vondra(at)enterprisedb(dot)com>
Cc: Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>, Greg Nancarrow <gregn4422(at)gmail(dot)com>, Euler Taveira <euler(at)eulerto(dot)com>, Peter Smith <smithpb2250(at)gmail(dot)com>, Rahila Syed <rahilasyed90(at)gmail(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)enterprisedb(dot)com>, Önder Kalacı <onderkalaci(at)gmail(dot)com>, japin <japinli(at)hotmail(dot)com>, Michael Paquier <michael(at)paquier(dot)xyz>, David Steele <david(at)pgmasters(dot)net>, Craig Ringer <craig(at)2ndquadrant(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Amit Langote <amitlangote09(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: row filtering for logical replication
Date: 2021-07-20 05:23:30
Message-ID: CAA4eK1+3=w++VOY5Z6VMeh1OLs9N+A3nwL+dUEuLgxGx2K-4Rg@mail.gmail.com
Lists: pgsql-hackers
On Mon, Jul 19, 2021 at 7:02 PM Tomas Vondra
<tomas(dot)vondra(at)enterprisedb(dot)com> wrote:
>
> On 7/19/21 1:00 PM, Dilip Kumar wrote:
> > On Mon, Jul 19, 2021 at 3:12 PM Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> >> a. Just log it and move to the next row
> >> b. send to stats collector some info about this which can be displayed
> >> in a view and then move ahead
> >> c. just skip it like any other row that doesn't match the filter clause.
> >>
> >> I am not sure if there is any use of sending a row if one of the
> >> old/new rows doesn't match the filter. Because if the old row
> >> doesn't match but the new one matches the criteria, we will anyway
> >> just throw away such a row on the subscriber instead of applying it.
> >
> > But at some point that will be true even if we skip the row based on
> > (a) or (c), right? Suppose the OLD row did not satisfy the condition
> > but the NEW row does. Even if we skip this operation, then in the
> > next operation on the same row, even when both the OLD and NEW rows
> > satisfy the filter, the operation will just be dropped by the
> > subscriber, because we did not send the row when it was first updated
> > to a value that satisfied the condition. So basically, once a row is
> > inserted that does not satisfy the condition, then no matter how many
> > updates we do to that row, either it will be skipped by the publisher
> > because the OLD row does not satisfy the condition, or it will be
> > skipped by the subscriber as there is no matching row.
> >
>
> I have a feeling it's getting overly complicated, to the extent that
> it'll be hard to explain to users and reason about. I don't think
> there's a "perfect" solution for cases when the filter expression gives
> different answers for old/new row - it'll always be surprising for some
> users :-(
>
It is possible, but OTOH, the three replication solutions (Debezium,
Oracle, IBM's InfoSphere Data Replication) that have this feature
seem to filter based on both old and new rows in one way or another.
Also, I am not sure the simple approach of just filtering based on
the new row is very clear, because it can also confuse users: even if
all the new rows match the filter, they may not see anything on the
subscriber (there is no matching row there to apply the change to),
and in fact that can cause a lot of network overhead without any gain.
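
To make that concrete, here is a sketch (the table, column, and
filter are made up for illustration; the WHERE syntax is the one
proposed in this patch):

    CREATE TABLE t (id int PRIMARY KEY, active bool);
    CREATE PUBLICATION pub FOR TABLE t WHERE (active);

    INSERT INTO t VALUES (1, false);          -- filtered out, never replicated
    UPDATE t SET active = true WHERE id = 1;  -- new row passes the filter

If we filter the UPDATE only on the new row, the second statement is
sent, but the subscriber never received the row with id = 1, so the
change is shipped over the network only to be discarded there.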
> So maybe the best thing is to stick to the simple approach already used
> e.g. by pglogical, which simply uses the new row when available (insert,
> update) and the old one for deletes.
>
> I think that behaves more or less sensibly and it's easy to explain.
>
Okay, if nothing better comes up, then we can fall back to this option.
> All the other things (e.g. turning UPDATE to INSERT, advanced conflict
> resolution etc.) will require a lot of other stuff,
>
I have not evaluated this yet, but I think spending some time
thinking about turning an Update into an Insert/Delete (yesterday's
suggestion by Alvaro) might be worthwhile, especially as that
approach seems to be followed by some other replication solutions as
well.
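
As a sketch of that idea (my reading of the suggestion, not something
the current patch does), with the same filter WHERE (active):

    -- old row fails the filter, new row passes => publish as INSERT
    UPDATE t SET active = true WHERE id = 1;
    -- old row passes the filter, new row fails => publish as DELETE
    UPDATE t SET active = false WHERE id = 2;
    -- if both rows pass  => publish as a normal UPDATE;
    -- if neither passes  => skip the change entirely

That way the set of rows on the subscriber always stays consistent
with the filter.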
> and I see them as
> improvements of this simple approach.
>
> >>> Maybe a second option is to have replication change any UPDATE into
> >>> either an INSERT or a DELETE, if the old or the new row do not pass the
> >>> filter, respectively. That way, the databases would remain consistent.
> >
> > Yeah, I think this is the best way to keep the data consistent.
> >
>
> It'd also require REPLICA IDENTITY FULL, which seems like it'd add a
> rather significant overhead.
>
Why? I think it would just need restrictions similar to what we are
planning for the Delete operation, such that the filter columns must
be present in the primary key or replica identity columns.
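
For example (hypothetical table and filter), with the default replica
identity the old row's filter result should still be computable:

    CREATE TABLE t2 (id int PRIMARY KEY, payload text);
    -- the filter references only the key column, which is part of the
    -- replica identity logged for UPDATE/DELETE, so evaluating the
    -- filter on the old row does not need REPLICA IDENTITY FULL
    CREATE PUBLICATION pub2 FOR TABLE t2 WHERE (id % 2 = 0);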
--
With Regards,
Amit Kapila.