Re: Strategy for moving a large DB to another machine with least possible down-time

From: Bill Moran <wmoran(at)potentialtech(dot)com>
To: Andreas Joseph Krogh <andreas(at)visena(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Strategy for moving a large DB to another machine with least possible down-time
Date: 2014-09-21 11:51:00
Message-ID: 20140921075100.47e035df8cd27493bbf8ce9d@potentialtech.com
Lists: pgsql-general

On Sun, 21 Sep 2014 13:36:18 +0200 (CEST)
Andreas Joseph Krogh <andreas(at)visena(dot)com> wrote:

> Hi all.
>
> PG-version: 9.3.5
>
> I have a DB large enough for it to be impractical to pg_dump/restore it
> (it would require too much down-time for the customer). Note that I'm not
> able to move the whole cluster, only *one* DB in that cluster.
>
> What is the best way to perform such a move? Can I use PITR, rsync +
> WAL-replay magic, or something else? Can Barman help with this, maybe?
>
> Thanks.

I've used Slony to do this kind of thing with great success in the past.

The biggest advantage of Slony is that you can install it without stopping the
DB server, wait patiently while it takes however long it needs to sync up
the two servers with little (if any) impact on operations, then switch
over when you're ready. The disadvantage of Slony is that the setup/config is
a bit involved.
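For reference, the procedure Bill describes can be sketched as a slonik
script. Everything below is a hypothetical example, not his actual setup:
the cluster name, conninfo strings, user, and table names are placeholders
you would replace for your environment, and Slony additionally requires
that every replicated table has a primary key.

```shell
#!/bin/sh
# Hypothetical Slony-I sketch for moving a single database.
# Node 1 = old server, node 2 = new server (same DB name on both).
# All names/hosts below are placeholders.

slonik <<'EOF'
cluster name = move_db;

node 1 admin conninfo = 'dbname=mydb host=oldhost user=slony';
node 2 admin conninfo = 'dbname=mydb host=newhost user=slony';

init cluster (id = 1, comment = 'old server');

# One replication set containing the tables to move.
create set (id = 1, origin = 1, comment = 'tables to move');
set add table (set id = 1, origin = 1, id = 1,
               fully qualified name = 'public.accounts');
# ... repeat "set add table" for each table ...

store node (id = 2, comment = 'new server', event node = 1);
store path (server = 1, client = 2,
            conninfo = 'dbname=mydb host=oldhost user=slony');
store path (server = 2, client = 1,
            conninfo = 'dbname=mydb host=newhost user=slony');

# Start copying; this can run for hours or days while the old
# server stays in production.
subscribe set (id = 1, provider = 1, receiver = 2, forward = no);
EOF
```

Once node 2 has caught up, the switch-over itself is a short
`lock set (id = 1, origin = 1); move set (id = 1, old origin = 1,
new origin = 2);`, run at a moment of your choosing, which is where the
minimal down-time comes from.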

--
Bill Moran
I need your help to succeed:
http://gamesbybill.com

Browse pgsql-general by date

Next message:     Lee Jason, 2014-09-21 12:23:12, "unique index on embedded json object"
Previous message: Andreas Joseph Krogh, 2014-09-21 11:36:18, "Strategy for moving a large DB to another machine with least possible down-time"