From: Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>
To: Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>
Cc: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Interval for launching the table sync worker
Date: 2017-04-21 14:19:34
Message-ID: CAD21AoC3tsaQ9SNEy=ZHvGms61HQ=MjGozwM+_uq+uG=xMqg1g@mail.gmail.com
Lists: pgsql-hackers
On Fri, Apr 21, 2017 at 5:33 PM, Kyotaro HORIGUCHI
<horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp> wrote:
> Hello,
>
> At Thu, 20 Apr 2017 13:21:14 +0900, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com> wrote in <CAD21AoDrw0OaHE=oVRRhQX248kjJ7W+1ViM3K76aP46HnHJsnQ(at)mail(dot)gmail(dot)com>
>> On Thu, Apr 20, 2017 at 12:30 AM, Petr Jelinek
>> <petr(dot)jelinek(at)2ndquadrant(dot)com> wrote:
>> > On 19/04/17 15:57, Masahiko Sawada wrote:
>> >> On Wed, Apr 19, 2017 at 10:07 PM, Petr Jelinek
>> >> <petr(dot)jelinek(at)2ndquadrant(dot)com> wrote:
>> >>> On 19/04/17 14:42, Masahiko Sawada wrote:
>> >>>> On Wed, Apr 19, 2017 at 5:12 PM, Kyotaro HORIGUCHI
>> >>>> <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp> wrote:
>> >>>>> At Tue, 18 Apr 2017 18:40:56 +0200, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com> wrote in <f64d87d1-bef3-5e3e-a999-ba302816a0ee(at)2ndquadrant(dot)com>
>> >>>>>> On 18/04/17 18:14, Peter Eisentraut wrote:
>> >>>>>>> On 4/18/17 11:59, Petr Jelinek wrote:
>> >>>>>>>> Hmm, if we create a hashtable for this, I'd say create a hashtable for
>> >>>>>>>> the whole table_states then. The reason it's a list now was that it
>> >>>>>>>> seemed unnecessary to have a hashtable when it will be empty almost
>> >>>>>>>> always, but there is no need to have both a hashtable + list IMHO.
>> >>>>>
>> >>>>> I understand that, but like Peter I also don't like the frequent
>> >>>>> palloc/pfree in a long-lasting context and the double loop.
>> >>>>>
>> >>>>>>> The difference is that we blow away the list of states when the catalog
>> >>>>>>> changes, but we keep the hash table with the start times around. We
>> >>>>>>> need two things with different life times.
>> >>>>>
>> >>>>> On the other hand, a hash seems overdone. In addition to that, the
>> >>>>> hash version leaks stale entries while subscriptions are
>> >>>>> modified. But vacuuming them is costly.
>> >>>>>
>> >>>>>> Why can't we just update the hashtable based on the catalog? I mean once
>> >>>>>> the record is not needed in the list, the table has been synced so there
>> >>>>>> is no need for the timestamp either since we'll not try to start the
>> >>>>>> worker again.
>> >>>>
>> >>>> I guess the table sync worker for the same table could need to be
>> >>>> started again. For example, please imagine a case where the table
>> >>>> belonging to the publication is removed from it and the corresponding
>> >>>> subscription is refreshed, and then the table is added to it again. We
>> >>>> have the record of the table with a timestamp in the hash table from
>> >>>> when the table was synced the first time, so the table sync after the
>> >>>> refresh could have to wait for the interval.
>> >>>>
>> >>>
>> >>> But why do we want to wait in such a case, where the user has explicitly
>> >>> requested a refresh?
>> >>>
>> >>
>> >> Yeah, sorry, I meant that we don't want to wait but cannot launch the
>> >> tablesync worker in such a case.
>> >>
>> >> But after more thought, the latest patch Peter proposed has the same
>> >> problem. Perhaps we need to always scan the whole pg_subscription_rel and
>> >> remove the entry once the corresponding table gets synced.
>> >>
>> >
>> > Yes, that's what I mean by "Why can't we just update the hashtable based
>> > on the catalog". And if we do that, then I don't understand why we
>> > need both a hashtable and a linked list if we need to update both based
>> > on catalog reads anyway.
>>
>> Thanks, I've now understood correctly. Yes, I think you're right. If
>> we update the hash table based on the catalog whenever the table state is
>> invalidated, we don't need to have both the hash table and the list.
>
> Ah, OK. The patch from Peter is still generating and replacing the
> content of the list. The attached patch stores everything into
> SubscriptionRelState. Contrary to my anticipation, the hash can
> be effectively kept small and removed.
>
Thank you for the patch!
Actually, I also bumped into the situation where we get the following
error during hash_seq_search. I guess we cannot commit a transaction
while a hash_seq_search scan is still open, but the sequential scan loop
in process_syncing_tables_for_apply could attempt to do exactly that.
2017-04-21 21:35:22.587 JST [7508] WARNING: leaked hash_seq_search scan for hash table 0x1f54980
2017-04-21 21:35:22.587 JST [7508] ERROR: no hash_seq_search scan for hash table "Logical replication table sync worker start times"
Regards,
--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center