Re: Is it possible to "pip" pg_dump output into new db ?

From: Frank <fr667766(at)gmail(dot)com>
To: rod(at)iol(dot)ie
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Is it possible to "pip" pg_dump output into new db ?
Date: 2014-03-27 11:22:25
Message-ID: CAMLSce5SWdcizKOHYA7zByWgn8AMpXWc9=CdunEBUxzoPBQROg@mail.gmail.com
Lists: pgsql-general

Ray, Alan,

thanks for your replies.

We have tested the dump/restore procedure with a few smaller databases and
it worked fine. With the large database we had a few small hiccups, as it
required a few modules and special tablespaces to be created before it
would actually start copying data. Nothing serious though; a test upgrade
is currently running.
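
For reference, pre-creating the required tablespaces and modules on the
target cluster looks roughly like this (tablespace name, path and
extension below are placeholders, not our actual ones):

  # run against the new 9.3 cluster before starting the restore
  psql -d postgres -c "CREATE TABLESPACE big_ts LOCATION '/mnt/data/big_ts';"
  psql -d targetdb -c "CREATE EXTENSION IF NOT EXISTS hstore;"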

We will use this upgrade-cycle to systematically test and evaluate all
upgrade options.

We have an "if all else fails" full plain dump that we just restored as a
test, in this case to gauge performance, though we restore it regularly to
make sure it actually works. The file is compressed with rar. Packed size
is about 100 GB, unpacked about 1 TB. Uncompressing alone on a decent
machine (256 GB RAM, 12 cores) took about a day. Importing via psql took
about 12 hours (fsync off, wal_segments adjusted, etc.).
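
For the record, the bulk-load settings we temporarily apply on the restore
target are roughly the following (values are illustrative, tuned per
machine, and reverted afterwards):

  # postgresql.conf on the restore target (9.3)
  fsync = off                   # only during the restore, never in production
  full_page_writes = off
  checkpoint_segments = 64      # the "wal_segments" adjustment mentioned above
  maintenance_work_mem = 2GB
  autovacuum = off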

Currently we are running the direct pg_dump / pg_restore upgrade from a
slave that we just took off WAL replication. From its progress so far I am
expecting something in the 12-15 hour range.
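
That is essentially the pipe Ray suggested, run with the 9.3 client
binaries; host and database names below are placeholders:

  # custom-format dump from the detached 9.2 slave, piped straight into
  # the new 9.3 server; a parallel restore (pg_restore -j) would need an
  # intermediate file instead of a pipe
  pg_dump -h old-slave -U postgres -Fc mydb | pg_restore -h new-server -U postgres -d mydb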

Finally we will try the pg_upgrade option on a test slave. I expect that to
be quite fast, as it more or less just needs to copy the data once and
correct/adjust the system tables, if I am not mistaken. So it should take
about as long as it takes to copy 1 TB of data, plus the table adjustments.
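
If we go that route, the invocation would look roughly like this (the paths
are placeholders for a Debian-style layout; --link would avoid most of the
1 TB copy at the cost of sharing data files with the old cluster):

  pg_upgrade \
    --old-bindir  /usr/lib/postgresql/9.2/bin \
    --new-bindir  /usr/lib/postgresql/9.3/bin \
    --old-datadir /var/lib/postgresql/9.2/main \
    --new-datadir /var/lib/postgresql/9.3/main
  # add --link to hard-link data files instead of copying them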

Fortunately we can use the weekend to freeze the database, so the reduced
downtime that the Slony approach might achieve is not a hard requirement,
and we can avoid the complexities of that approach.

Thanks,

Frank

On Tue, Mar 25, 2014 at 4:46 PM, Raymond O'Donnell <rod(at)iol(dot)ie> wrote:

> On 25/03/2014 13:56, Frank Foerster wrote:
> >
> > Hi,
> >
> > we are currently in the process of upgrading a production/live 1 TB
> > database from 9.2 to 9.3 via pg_dump, which is quite a lengthy process.
> >
> > Fortunately we have a capable spare-server so we can restore into a
> > clean, freshly setup machine.
> >
> > I just wondered whether the intermediate step of writing the dump-file
> > and re-reading it to have it written to the database is really
> > necessary. Is there any way to "pipe" the dump-file directly into the
> > new database-process or would such functionality make sense ?
>
> Surely:
>
> pg_dump [...etc...] | psql [...etc...]
>
> Though I'm sure it will still take a long time for a database of that size.
>
> Another option to explore would be to use Slony, which can replicate
> databases between different Postgres versions - one of its design
> use-cases is to perform upgrades like this with a minimum of down-time.
> You can replicate the database over to the new server, and then
> switching need take only seconds once the new one is ready.
>
> Ray.
>
>
> --
> Raymond O'Donnell :: Galway :: Ireland
> rod(at)iol(dot)ie
>
