From: Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>
To: Ron <ronljohnsonjr(at)gmail(dot)com>, pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: pg_dump to a remote server
Date: 2018-04-17 00:18:02
Message-ID: 1fa1fafe-d814-066e-385e-77a2766c050d@aklaver.com
Lists: pgsql-general
On 04/16/2018 04:58 PM, Ron wrote:
> We're upgrading from v8.4 to 9.6 on a new VM in a different DC. The
> dump file will be more than 1TB, and there's not enough disk space on
> the current system to hold it.
>
> Thus, how can I send the pg_dump file directly to the new server while
> the pg_dump command is running? NFS is one method, but are there others
> (netcat, rsync)? Since it's within the same company, encryption is not
> required.
Maybe?:
pg_dump -d test -U postgres -Fc | ssh aklaver(at)arkansas 'cat > test_cat.out'
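
If the goal is to avoid writing a dump file anywhere, an untested sketch
along the same lines (user, host and database names are placeholders, and
the target database must already exist on the new server):

# plain-format dump piped straight into psql on the new server
pg_dump -d test -U postgres | ssh user(at)newhost 'psql -d test'

# or custom format piped into pg_restore; note that a parallel restore
# (-j) is not possible here, since that needs a seekable archive file
pg_dump -d test -U postgres -Fc | ssh user(at)newhost 'pg_restore -d test'

A netcat variant, since encryption is not a concern (exact flags depend on
your netcat flavor): run 'nc -l 9999 > test.out' on the new server, then
'pg_dump -d test -U postgres -Fc | nc newhost 9999' on the old one.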
>
> Or would it be better to install both 8.4 and 9.6 on the new server (can
> I even install 8.4 on RHEL 6.9?), rsync the live database across and
> then set up log shipping, and when it's time to cut over, do an in-place
> pg_upgrade?
>
> (Because this is a batch system, we can apply the data input files to
> bring the new database up to "equality" with the 8.4 production system.)
>
> Thanks
>
--
Adrian Klaver
adrian(dot)klaver(at)aklaver(dot)com