From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Andres Martin del Campo Campos <andres(at)invisible(dot)email>
Cc: vignesh C <vignesh21(at)gmail(dot)com>, pgsql-bugs(at)lists(dot)postgresql(dot)org
Subject: Re: BUG #18027: Logical replication taking forever
Date: 2023-07-22 05:25:53
Message-ID: CAA4eK1JMoa3GaNRGTafP2RznW50vV5snVF0+nhRo85uV9PwTkA@mail.gmail.com
Lists: pgsql-bugs
On Thu, Jul 20, 2023 at 10:46 PM Andres Martin del Campo Campos
<andres(at)invisible(dot)email> wrote:
> Seems like I don't have that table
>
>
> [image: image.png]
>
>
> There are no errors in the logs but I only see dead tuples and no live
> tuples
>
>
Oh, can you show us the dead and live tuple counts on both the publisher and
the subscriber? Ideally, the COPY command should copy only the recent data
based on the snapshot; it shouldn't copy old/dead rows. One possibility I can
think of is that if the initial sync fails for some reason, the whole copy is
rolled back and restarted. In that case the table appears to keep growing
with dead tuples while the copy never finishes, especially if the error
occurs repeatedly. If that is happening, you should see errors in the
subscriber-side logs. Can you verify in some way that this is not happening
in your case?
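
As a sketch, queries along these lines would show the tuple counts and the
table-sync state (the table name 'mytable' is a placeholder; substitute the
table being replicated):

```sql
-- Run on both publisher and subscriber: live vs. dead tuple counts,
-- plus the last (auto)vacuum time for the table in question.
SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'mytable';

-- Run on the subscriber: per-table sync state of the subscription.
-- srsubstate is 'i' (initialize), 'd' (data copy), 's'/'r' (synced/ready);
-- a table stuck in 'd' across restarts suggests the copy keeps failing.
SELECT srrelid::regclass AS table_name, srsubstate
FROM pg_subscription_rel;
```

If the copy is repeatedly failing and being rolled back, n_dead_tup on the
subscriber would keep climbing while n_live_tup stays near zero, matching
what you describe.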
--
With Regards,
Amit Kapila.