Re: pglogical performance for copying large table

From: srinivas oguri <srinivasoguri7(at)gmail(dot)com>
To: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
Cc: pgsql-admin <pgsql-admin(at)postgresql(dot)org>
Subject: Re: pglogical performance for copying large table
Date: 2023-02-14 02:08:33
Message-ID: CADfH0ysdH6A8WgNPhT9FNrJjEsGZaNqtXd60jJ-J-mCuZKkA0g@mail.gmail.com
Lists: pgsql-admin

>> What does this mean in terms of parameters? Are all of them being used?

No, actually the initial sync is restricted to a single process running the
COPY command.

>> Pg_dump and pg_restore
Basically this is our largest database, about 20 TB. We would like to
configure replication so that we can switch over with minimal downtime.

Is it possible to configure logical replication using pg_dump for the
initial data copy? Can you please help me with detailed steps?
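For context, the usual recipe for seeding built-in logical replication (not the pglogical extension, which has its own synchronize_data option) from pg_dump looks roughly like the sketch below. All names (mypub, my_slot, mysub, mydb, dumpdir, host names) are placeholders, and the snapshot name must be taken from the output of the slot-creation step:

```shell
# Sketch only; assumes native logical replication on PostgreSQL 10+.

# 0. On the source, create a publication for the tables to replicate.
psql -h source -d mydb -c "CREATE PUBLICATION mypub FOR ALL TABLES;"

# 1. Create the replication slot over a replication-protocol connection
#    so that it exports a snapshot. The snapshot is only valid while this
#    session stays open, so keep it open until pg_dump finishes.
psql "host=source dbname=mydb replication=database" \
     -c "CREATE_REPLICATION_SLOT my_slot LOGICAL pgoutput EXPORT_SNAPSHOT;"

# 2. Dump using the exported snapshot name returned by step 1
#    (directory format permits parallel jobs with -j).
pg_dump -h source -d mydb --snapshot='<snapshot_name_from_step_1>' \
        -Fd -j 8 -f dumpdir

# 3. Restore on the target in parallel.
pg_restore -h target -d mydb -j 8 dumpdir

# 4. Attach a subscription to the pre-created slot without re-copying
#    data; it streams only the changes made since the snapshot.
psql -h target -d mydb -c "CREATE SUBSCRIPTION mysub \
  CONNECTION 'host=source dbname=mydb' PUBLICATION mypub \
  WITH (create_slot = false, slot_name = 'my_slot', copy_data = false);"
```

The key points are create_slot = false and copy_data = false on the subscription, which tell the target to reuse the existing slot and skip the built-in single-process table copy.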

On Tue, Feb 14, 2023, 4:07 AM Jeff Janes <jeff(dot)janes(at)gmail(dot)com> wrote:

>
>
> On Mon, Feb 13, 2023 at 1:30 PM srinivas oguri <srinivasoguri7(at)gmail(dot)com>
> wrote:
>
>>
>> I have set the parallel processes for logical replication as 12.
>>
>
> What does this mean in terms of parameters? Are all of them being used?
>
> Cheers,
>
> Jeff
>
>>
