Re: Moving large table between servers: logical replication or postgres_fdw

From: Rob Sargent <robjsargent(at)gmail(dot)com>
To: Rene Romero Benavides <rene(dot)romero(dot)b(at)gmail(dot)com>
Cc: rhys(dot)stewart(at)gmail(dot)com, pgsql-general(at)postgresql(dot)org
Subject: Re: Moving large table between servers: logical replication or postgres_fdw
Date: 2018-12-05 07:18:59
Message-ID: 50F5075B-680E-4A98-A133-40D684C2D53F@gmail.com
Lists: pgsql-general

> On Dec 4, 2018, at 11:13 PM, Rene Romero Benavides <rene(dot)romero(dot)b(at)gmail(dot)com> wrote:
>
> I tend to believe that a backup (pg_dump) in directory format (-F d) taken with multiple parallel jobs, followed by a restore (pg_restore) also with multiple concurrent jobs, would be better.
>
>> On Tue, Dec 4, 2018 at 21:14, Rhys A.D. Stewart <rhys(dot)stewart(at)gmail(dot)com> wrote:
>> Greetings Folks,
>>
>> I have a relatively large table (100m rows) that I want to move to a
>> new box with more resources. The table isn't doing anything, i.e. it's
>> not being updated or read from. Which approach would be faster to move
>> the data over:
>>
>> a) Use postgres_fdw and do "CREATE TABLE local_table AS SELECT * FROM foreign_table".
>> b) Set up logical replication between the two servers.
>>
>> Regards,
>>
>> Rhys
>> Peace & Love|Live Long & Prosper
>>
>
>
> --
> Genius is 1% inspiration and 99% perspiration.
> Thomas Alva Edison
> http://pglearn.blogspot.mx/
>
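
For the record, rough sketches of each suggestion so far. These are sketches only: host names, database and user names, paths, table names, and job counts below are all placeholders.

Rene's dump/restore route needs directory format, since pg_dump only supports -j with -F d. Note that -j parallelizes across tables, so a single big table gains little on the dump side; pg_restore -j can still overlap the data load with index builds:

    pg_dump -h oldhost -d mydb -t big_table -F d -j 4 -f /tmp/big_table_dump
    pg_restore -h newhost -d mydb -j 4 /tmp/big_table_dump

Option a), run on the new box. Raising fetch_size above its default of 100 helps throughput on a pull like this:

    CREATE EXTENSION postgres_fdw;
    CREATE SERVER oldbox FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'oldhost', dbname 'mydb', fetch_size '10000');
    CREATE USER MAPPING FOR CURRENT_USER SERVER oldbox
        OPTIONS (user 'someuser', password 'secret');
    CREATE SCHEMA fdw;
    IMPORT FOREIGN SCHEMA public LIMIT TO (big_table)
        FROM SERVER oldbox INTO fdw;
    CREATE TABLE big_table AS SELECT * FROM fdw.big_table;

Option b) needs wal_level = logical on the old server and an empty copy of the table (same columns) already created on the new one; the initial table sync does the actual copy:

    -- on the old server
    CREATE PUBLICATION big_table_pub FOR TABLE big_table;
    -- on the new server
    CREATE SUBSCRIPTION big_table_sub
        CONNECTION 'host=oldhost dbname=mydb user=repuser'
        PUBLICATION big_table_pub;
    -- once the table has synced, the subscription can go
    DROP SUBSCRIPTION big_table_sub;
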
Let’s compromise: COPY the table out as described, tell the auditors where the file is, and skip the COPY in. If you truly don’t need the data online going forward, this might actually pass muster.
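
Something like this, with the table name and path again as placeholders (COPY TO a server-side file needs superuser, or pg_write_server_files on v11; from psql, \copy writes the file client-side instead):

    COPY big_table TO '/var/backups/big_table.csv' WITH (FORMAT csv, HEADER);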
