From: Evan Bauer <evanbauer(at)mac(dot)com>
To: Ron <ronljohnsonjr(at)gmail(dot)com>
Cc: pgsql-admin(at)lists(dot)postgresql(dot)org
Subject: Re: More efficient pg_restore method?
Date: 2018-08-28 16:57:59
Message-ID: 1F9927D1-5C55-48B9-8C6D-ECE3F6BDA3ED@mac.com
Lists: pgsql-admin
Ron,
A couple of starting questions:
What is the size and latency of the network pipe between the primary and backup servers?
What is the size of the database you need to restore?
Is there a reason not to do a network copy of the backup directory contents to the database server and run pg_restore locally?
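Concretely, that might look something like the following; hostnames, paths, database name, and the job count are just examples to adjust for your environment:

```shell
# Copy the directory-format dump from the backup server onto the
# database server, then restore locally so pg_restore's parallel
# workers read from local disk instead of the network.

# 1. On the database server, pull the dump directory across
#    (rsync can resume and compress; scp -r would also work):
rsync -a --compress backuphost:/backups/mydb.dir/ /var/tmp/mydb.dir/

# 2. Restore locally with parallel jobs (-j); one job per core
#    is a common starting point for a multi-threaded restore:
pg_restore -h localhost -U postgres -d mydb -j 8 /var/tmp/mydb.dir
```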
Cheers,
- Evan
Evan Bauer
eb(at)evanbauer(dot)com
+1 646 641 2973
Skype: evanbauer
> On Aug 28, 2018, at 12:48, Ron <ronljohnsonjr(at)gmail(dot)com> wrote:
>
>
> Pg 9.6.9 on Linux...
>
> Given a backup server storing a "format=directory" database backup, and a database server, should I:
>
> Option #1: run pg_restore on the backup server and "push" the data to the database server via port 5432, or
> Option #2: have the backup server serve the dump directory via NFS, and run pg_restore on the database server, pulling the data via the NFS protocol?
>
> (It'll be a multi-threaded restore over a 10Gb pipe.)
>
> --
> Angular momentum makes the world go 'round.
>