From: Vick Khera <vivek(at)khera(dot)org>
To: David Gauthier <davegauthierpg(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Backup Strategy Advise
Date: 2018-04-24 18:05:34
Message-ID: CALd+dccWy9tbQt9tUfUmJw-VZimp5KKc4PaNSnKochqdD95_aw@mail.gmail.com
Lists: pgsql-general

On Tue, Apr 24, 2018 at 10:50 AM, David Gauthier <davegauthierpg(at)gmail(dot)com>
wrote:

> Typically, I would think doing a weekly full backup, daily incremental
> backups and turn on journaling to capture what goes on since the last
> backup.
>

This is almost exactly the mechanism behind the streaming replication built into
Postgres, except that instead of applying the stream you archive it. If you
have atomic file system snapshots, you can implement this strategy as follows:
mark the database for a binary base backup, snapshot the file system, then copy
that snapshot off to another system (local or off-site). Meanwhile, you
accumulate the WAL files just as you would for streaming replication. Once the
copy is done, release the file system snapshot and continue to archive the
logs, much as you would ship them to a remote system to be applied. You just
don't apply them until you need to do a recovery.
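A minimal sketch of that setup, assuming a 2018-era (pre-v15) server where the
backup markers are pg_start_backup()/pg_stop_backup(); the archive path and the
snapshot steps are placeholders for whatever your storage provides:

```
# postgresql.conf -- keep WAL around instead of discarding it
wal_level = replica
archive_mode = on
archive_command = 'cp %p /mnt/wal_archive/%f'   # /mnt/wal_archive is a placeholder

# Weekly base backup, roughly:
#   psql -c "SELECT pg_start_backup('weekly', true);"
#   <take an atomic snapshot of the data directory's file system>
#   psql -c "SELECT pg_stop_backup();"
#   <copy the snapshot off-host, then release it>
```

To recover, you restore the newest base backup and replay the archived WAL on
top of it via restore_command, which is what makes the accumulated logs your
"incrementals."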

Or just set up streaming replication to a hot standby, because that's the
right thing to do. For over a decade I did this with twin servers and
slony1 replication. The cost of the duplicate hardware was nothing compared
to the cost of downtime.
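For reference, a bare-bones streaming-replication sketch for a PostgreSQL 10
pair (current as of this 2018 thread); the hostname, user, address, and data
directory are placeholders:

```
# primary: postgresql.conf
wal_level = replica
max_wal_senders = 3

# primary: pg_hba.conf -- allow the standby to connect for replication
# host  replication  replicator  192.0.2.10/32  md5

# standby: hot_standby = on in postgresql.conf, then seed and stream:
#   pg_basebackup -h primary.example.com -U replicator -D /var/lib/pgsql/data -R
# (-R writes a recovery.conf with primary_conninfo and standby_mode = 'on')
```

Once the standby is streaming, it stays seconds behind the primary and can be
promoted when the primary fails, which is the downtime argument above.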
