From: Vick Khera <vivek(at)khera(dot)org>
To: Martin Mueller <martinmueller(at)northwestern(dot)edu>
Cc: "karsten(dot)hilbert(at)gmx(dot)net" <karsten(dot)hilbert(at)gmx(dot)net>, "pgsql-general(at)lists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: a back up question
Date: 2017-12-06 14:57:45
Message-ID: CALd+dccXXahbFCCL7q6wPU+VWGELOdxCET-N=nOdi6Aiqa6Myw@mail.gmail.com
Lists: pgsql-general
On Wed, Dec 6, 2017 at 7:52 AM, Martin Mueller <
martinmueller(at)northwestern(dot)edu> wrote:
>
> The objective is to create a backup from which I can restore any or all
> tables in the event of a crash. In my case, I use Postgres for my own
> scholarly purposes. Publications of whatever kind are not directly made
> public via the database. I am my only customer, and a service interruption,
> while a nuisance to me, does not create a crisis for others. I don’t want
> to lose my work, but a service interruption of a day or a week is no big
> deal.
>
I'd stick with pg_dump for sure. You have two main choices, depending on how big
your database is and how fast your disks are: 1) the "c" (custom) format, which
gives you a single compressed file you can restore from; 2) the "d" (directory)
format, which you would then need to tar and compress for easy tracking and
off-site copying. The only real advantage of the "d" format is that you can
parallelize the dump if you have enough spare I/O bandwidth.
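For reference, the two invocations look roughly like this (the database name
"mydb" and the /backups paths are just placeholders, adjust to taste):

    # custom format: one compressed file, restorable with pg_restore
    pg_dump -Fc -f /backups/mydb.dump mydb

    # directory format: one file per table, dumped in parallel with 4 jobs
    pg_dump -Fd -j 4 -f /backups/mydb.dir mydb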
For my backups on a production database serving thousands of customers per
day (mostly in the US) on a web app, I just did a "c" format pg_dump
nightly around 3am US Eastern time. It was our low time, and the impact on
the database server was not significant since it had more RAM than the size
of the database on disk (256GB RAM vs 100GB disk size including indexes).
The backups ran on a different machine that connected to the DB server over the
LAN, wrote the dump to its own local disk, and then copied it to an off-site
server. Before I had such beefy hardware, I would do the dump from a
replica which was updated using Slony1 software. The pg_dump backups were
for disaster recovery and customer error recovery, so I kept about 2 weeks'
worth of them.
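If you want to automate something similar, a crontab entry along these lines
would do it (database name, paths, and the 14-day retention window are
placeholders; note that % has to be escaped in crontab):

    # nightly custom-format dump at 3am, then prune dumps older than 14 days
    0 3 * * * pg_dump -Fc -f /backups/mydb-$(date +\%F).dump mydb && find /backups -name 'mydb-*.dump' -mtime +14 -delete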
Since you have no other consumers of your data, just use a simple "c"
format dump however often you like, then copy those off-site. Easy peasy.
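And when you do need something back, pg_restore can pull a single table out of a
custom-format dump or recreate the whole database (again, names here are just
placeholders):

    # restore just one table into an existing database
    pg_restore -d mydb -t mytable /backups/mydb.dump

    # or restore everything into a freshly created database
    createdb mydb_restore
    pg_restore -d mydb_restore /backups/mydb.dump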