Re: Logical replication and pg_dump for out of band synchronization

From: Peter Eisentraut <peter(dot)eisentraut(at)enterprisedb(dot)com>
To: Joseph Hammerman <joe(dot)hammerman(at)datadoghq(dot)com>, pgsql-admin(at)lists(dot)postgresql(dot)org
Subject: Re: Logical replication and pg_dump for out of band synchronization
Date: 2022-07-06 09:21:31
Message-ID: 4b3b8dae-fcd6-eb0f-c29f-f27a98a8d5b1@enterprisedb.com
Lists: pgsql-admin

On 04.07.22 22:40, Joseph Hammerman wrote:
> We have been trying to use logical replication to synchronize some
> oversized tables (1 TB+). We are replicating from 9.6 -> 11.x. However,
> the long sync times for the initial snapshots of these large tables have
> been causing incidents, since autovacuum cannot clean up anything older
> than the xmin horizon.

What replication system are you using with PG 9.6? If you are using
pglogical, it includes a program, pglogical_create_subscriber, that
addresses this initial-sync problem.
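
For illustration, an invocation might look roughly like this; the node
name, DSNs, and data directory below are placeholders, and the exact
options should be checked against the pglogical documentation for the
installed version:

    pglogical_create_subscriber \
        --subscriber-name=replica1 \
        --subscriber-dsn="host=replica-host dbname=prod" \
        --provider-dsn="host=provider-host dbname=prod" \
        --pgdata=/var/lib/postgresql/11/main

As far as I know, it works by converting a physical base backup of the
provider into a subscriber node, so the initial copy does not go through
the slow logical sync path.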

> We then intend to take a dump, restore the table, and play back the
> changes by enabling the subscription. This way the bulk data transfer
> happens out of band from the production application. Our testing shows
> that this works cleanly and that new changes replicate correctly to
> the target relations. Additionally, pg_dump has a --snapshot flag that
> appears to have been added to support this sort of workflow.

Yes, this would also be a valid solution.
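
A rough sketch of that workflow follows, for illustration only. The
syntax assumes a provider running PG 10 or later with built-in logical
replication; with pglogical on a 9.6 provider the corresponding pglogical
slot and subscription calls would be used instead, and all object names,
snapshot IDs, and connection strings below are placeholders:

    -- on the provider, in a replication connection kept open while dumping:
    --   psql "dbname=prod replication=database"
    CREATE_REPLICATION_SLOT big_sync LOGICAL pgoutput;
    -- the result includes a snapshot_name such as 00000003-0000001B-1

    # on the provider host, dump the oversized tables as of that snapshot:
    pg_dump --snapshot=00000003-0000001B-1 --table=big_table \
            -Fc -f big_table.dump prod

    # restore into the subscriber database:
    pg_restore -d prod_replica big_table.dump

    -- on the subscriber, attach to the existing slot without the initial copy
    -- (a publication big_pub covering big_table is assumed on the provider):
    CREATE SUBSCRIPTION big_sync_sub
        CONNECTION 'host=provider-host dbname=prod'
        PUBLICATION big_pub
        WITH (create_slot = false, slot_name = 'big_sync',
              copy_data = false, enabled = false);
    ALTER SUBSCRIPTION big_sync_sub ENABLE;

The key point is that the dump is taken with the same snapshot that the
slot exported at creation, so the restored data and the subsequently
replayed changes line up without gaps or duplicates.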
