From: Carl Karsten <carl(at)personnelware(dot)com>
To: Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>
Cc: Martin Mueller <martinmueller(at)northwestern(dot)edu>, "pgsql-general(at)lists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: a back up question
Date: 2017-12-05 22:51:28
Message-ID: CADmzSShOuFvNJ9qGp3tiNpYJ-Gews1AfeM-D+cdR9m7Ryhub9Q@mail.gmail.com
Lists: pgsql-general
On Tue, Dec 5, 2017 at 4:15 PM, Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>
wrote:
> Carl Karsten wrote:
> > Nothing wrong with lots of tables and data.
> >
> > Don't impose any constraints on your problem that you don't need.
> >
> > Like, what are you backing up to? $400 for a 1T SSD or $80 for a 2T
> > USB3 spinny disk.
> >
> > If you are backing up while the db is being updated, you need to make
> > sure updates are queued until the backup is done. Don't mess with that
> > process. Personally I would assume the db is always being updated and
> > expect that.
>
> A backup generated by pg_dump never includes writes that are in flight
> while the backup is being taken. That would make the backup absolutely
> worthless!
>
Hmm, I kinda glossed over my point:
if you come up with your own process to chop the backup up into little
pieces, you risk letting writes in, and then yeah, worthless.
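
To make that concrete, here is a minimal sketch (the database name mydb
and the table names a and b are made up for illustration). Each pg_dump
run takes its own snapshot, so a single run is consistent no matter how
long it takes, but two separate runs are not:

  # One consistent snapshot of the whole db, even while writes continue:
  pg_dump -Fc -f mydb.dump mydb

  # Chunking done safely: directory format with parallel workers still
  # shares one synchronized snapshot across all jobs:
  pg_dump -Fd -j 4 -f mydb_backup_dir mydb

  # DIY chunking, the risky kind: each run gets its own snapshot, so
  # writes landing between the two runs leave a and b inconsistent:
  pg_dump -t a mydb > a.sql
  pg_dump -t b mydb > b.sql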
--
Carl K