Re: Strategy for moving a large DB to another machine with least possible down-time

From: Andy Colson <andy(at)squeakycode(dot)net>
To: Andreas Joseph Krogh <andreas(at)visena(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: Strategy for moving a large DB to another machine with least possible down-time
Date: 2014-09-21 18:42:25
Message-ID: 541F1C11.3040202@squeakycode.net
Lists: pgsql-general

On 09/21/2014 06:36 AM, Andreas Joseph Krogh wrote:
> Hi all.
> PG-version: 9.3.5
> I have a DB large enough that pg_dump/restore is impractical (it would require too much down-time for the customer). Note that I'm not able to move the whole cluster, only *one* DB in that cluster.
> What is the best way to perform such a move? Can I use PITR, rsync + WAL-replay magic, or something else?
> Can Barman help with this, maybe?
> Thanks.
> --
> *Andreas Joseph Krogh*
> CTO / Partner - Visena AS
> Mobile: +47 909 56 963
> andreas(at)visena(dot)com <mailto:andreas(at)visena(dot)com>
> www.visena.com <https://www.visena.com>
> <https://www.visena.com>

I had a similar, if smaller, problem: one big-ish table I wanted to move, but not everything else. The table had a timestamp column that was "close enough to unique". I wrote a perl script that dumped 100K records at a time, ordered by that timestamp. After each batch it would disconnect and sleep for 30 seconds or so, which kept the load on the source low.
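
Roughly, the loop looked something like this. It's a simplified sketch, not the actual script; the table name, column names, and connection details below are all placeholders:

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Placeholder connections -- fill in your own dsn/user/password.
my $src = DBI->connect('dbi:Pg:dbname=olddb;host=oldhost', 'user', 'pw',
                       { RaiseError => 1, AutoCommit => 1 });
my $dst = DBI->connect('dbi:Pg:dbname=newdb;host=newhost', 'user', 'pw',
                       { RaiseError => 1, AutoCommit => 0 });

my $last_ts = '-infinity';   # high-water mark; persist it if the script may die
my $batch   = 100_000;

while (1) {
    # Grab the next batch, ordered by the "close enough to unique" timestamp.
    my $rows = $src->selectall_arrayref(
        'SELECT id, created_at, payload FROM big_table
          WHERE created_at > ? ORDER BY created_at LIMIT ?',
        undef, $last_ts, $batch);
    last unless @$rows;

    my $ins = $dst->prepare(
        'INSERT INTO big_table (id, created_at, payload) VALUES (?, ?, ?)');
    $ins->execute(@$_) for @$rows;
    $dst->commit;

    # Strict ">" can skip rows that share the boundary timestamp -- that's
    # the price of "close enough to unique"; verify counts before switching.
    $last_ts = $rows->[-1][1];
    sleep 30;   # back off so the source stays responsive
}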

It took a while, but once it caught up I changed the script to fetch max(timestamp) from both olddb and newdb and copy only the missing rows. That let me keep the two in sync until I was ready to switch over.
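
The catch-up version just started from the target's high-water mark instead of from the beginning. Again a sketch, with the same placeholder names as above:

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

my $src = DBI->connect('dbi:Pg:dbname=olddb;host=oldhost', 'user', 'pw',
                       { RaiseError => 1 });
my $dst = DBI->connect('dbi:Pg:dbname=newdb;host=newhost', 'user', 'pw',
                       { RaiseError => 1 });

# Only rows newer than the target's max(timestamp) still need copying.
my ($start) = $dst->selectrow_array(
    "SELECT coalesce(max(created_at)::text, '-infinity') FROM big_table");

my ($behind) = $src->selectrow_array(
    'SELECT count(*) FROM big_table WHERE created_at > ?', undef, $start);
print "rows left to copy: $behind (starting from $start)\n";

# ...then run the same batching loop as above with $last_ts = $start.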

-Andy
