From: | Stuart Bishop <stuart(at)stuartbishop(dot)net> |
---|---|
To: | lcohan(at)web(dot)com |
Cc: | "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org> |
Subject: | Re: Postgres backup solution |
Date: | 2017-03-15 09:26:49 |
Message-ID: | CADmi=6NdC0+k2zKqH9CY-40igX4p4M_KVXDNsfYEsuZtK48ooQ@mail.gmail.com |
Lists: | pgsql-general |
On 15 March 2017 at 03:04, John McKown <john(dot)archie(dot)mckown(at)gmail(dot)com> wrote:
> Your message is not displaying, at least not for me. I guess that my reader
> does not understand the "smime.p7m" file, which shows as an attachment. For
> others, his question is:
>
> === original question from Lawrence Cohan ===
>
> Yes, this is what I intended to ask:
>
> What would be a recommended solution for backing up a very large Postgres
> (~13 TB) database in order to protect against data deletion or corruption?
> The current setup only backs up/restores to a standby read-only Postgres
> server via AWS S3 using wal-e; however, this does not offer the comfort of
> keeping a full backup available in case we need to restore deleted or
> corrupted data.
'wal-e backup-push' will store a complete base backup in S3, which can be
restored using 'wal-e backup-fetch'. And since you are already using
wal-e for log shipping, you have full point-in-time recovery (PITR)
available.
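As a rough sketch of what that looks like in practice (assuming wal-e is already configured with AWS credentials and a WALE_S3_PREFIX under an envdir such as /etc/wal-e.d/env, which is the usual wal-e setup; the paths here are hypothetical):

```shell
# Push a full base backup of the running cluster to S3.
# $PGDATA is the cluster data directory, e.g. /var/lib/postgresql/9.6/main.
envdir /etc/wal-e.d/env wal-e backup-push "$PGDATA"

# List the base backups currently stored under WALE_S3_PREFIX.
envdir /etc/wal-e.d/env wal-e backup-list

# Restore the most recent base backup into an empty data directory
# (on the recovery host, with PostgreSQL stopped); WAL replay via
# restore_command / wal-e wal-fetch then brings it to the target point.
envdir /etc/wal-e.d/env wal-e backup-fetch /var/lib/postgresql/restore LATEST

# Prune old base backups, keeping e.g. the five most recent.
envdir /etc/wal-e.d/env wal-e delete --confirm retain 5
```

Running backup-push from cron gives you periodic full backups on top of the continuous WAL archiving you already have.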
pg_dump for a logical backup is also a possibility, although with 13 TB
you probably don't want to hold a transaction open for that long and are
better off with wal-e, barman or another binary backup tool.
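If a logical backup is still wanted for selective restores, a parallel directory-format dump at least shortens the wall-clock time (the snapshot is still held for the dump's duration). A sketch, with a hypothetical database name and output path:

```shell
# Parallel dump in directory format (-Fd), using 8 worker jobs (-j 8).
pg_dump -Fd -j 8 -f /backups/mydb.dir mydb

# Parallel restore of the whole dump into an existing database.
pg_restore -d mydb_restore -j 8 /backups/mydb.dir

# Or restore a single table from the same dump, e.g. "orders".
pg_restore -d mydb_restore -t orders /backups/mydb.dir
```

Being able to pg_restore one table is exactly the "restore some deleted data" case that a binary standby alone cannot cover.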
--
Stuart Bishop <stuart(at)stuartbishop(dot)net>
http://www.stuartbishop.net/