From: Alan Hodgson <ahodgson(at)simkin(dot)ca>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Is it possible to "pip" pg_dump output into new db ?
Date: 2014-03-25 16:19:15
Message-ID: 1712698.gmrIz94h6M@skynet.simkin.ca
Lists: pgsql-general

On Tuesday, March 25, 2014 02:56:48 PM Frank Foerster wrote:
> Hi,
>
> we are currently in the process of upgrading a production/live 1 TB
> database from 9.2 to 9.3 via pg_dump, which is quite a lengthy process.
>
> Fortunately we have a capable spare server, so we can restore into a
> clean, freshly set up machine.
>
> I just wondered whether the intermediate step of writing the dump file and
> re-reading it to write it into the database is really necessary. Is there
> any way to "pipe" the dump file directly into the new database process, or
> would such functionality make sense?
>
> I can only speak for us, but each time we do a dump/restore we need to
> extract/copy/move very large files, and piping directly into something
> like psql/pg_restore on another machine would greatly reduce upgrade time
> and pain.
>
> Thanks and best regards,
>
> Frank
Sure. For maximum speed, something like:

    pg_dump [options] source_db | pigz - | ssh -e none user(at)target \
      "gunzip - | psql [options] target_db"
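
If pigz isn't available on both machines, a minimal variant (same idea,
assuming only stock tools) is to let ssh compress the stream itself with
-C; that's single-threaded gzip, so slower than pigz on a big dump, but it
needs nothing extra installed:

    pg_dump [options] source_db | ssh -C -e none user(at)target \
      "psql [options] target_db"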
Depending on your hardware, though, doing a custom-format backup to a file
and then using it for a parallel restore would probably end up being faster
overall, plus you get to keep the backup if needed. In my experience, the
restore is a lot slower than the backup.
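
For reference, a minimal sketch of that approach (the file path and job
count here are illustrative, not from the original post):

    # Custom-format dump; -Fc is what makes parallel restore possible
    pg_dump -Fc [options] -f /backups/source_db.dump source_db
    # Restore with 4 parallel jobs; tune -j to the target's CPU cores
    pg_restore -j 4 -d target_db [options] /backups/source_db.dump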
Slony is also great for eliminating most of the downtime, at the expense of
a lot of setup and testing time.