From: Daniel Farina <daniel(at)heroku(dot)com>
To: Jim Nasby <jim(at)nasby(dot)net>
Cc: cedric(at)2ndquadrant(dot)com, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>, Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc>, Andrew Dunstan <andrew(at)dunslane(dot)net>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: Interesting post-mortem on a near disaster with git
Date: 2013-04-04 01:36:00
Message-ID: CAAZKuFYorxEn5=4aigh-m_ag-nqsOjJoOJbFQNw77cOSQZ2Wiw@mail.gmail.com
Lists: pgsql-hackers
On Wed, Apr 3, 2013 at 6:18 PM, Jim Nasby <jim(at)nasby(dot)net> wrote:
>> > What about rdiff-backup? I set it up for personal use years ago
>> > (via the handy open source bash script backupninja), and it is a
>> > pretty nice no-frills point-in-time, self-expiring, file-based
>> > automatic backup program that works well with file synchronization
>> > like rsync (I rdiff-backup to one disk and rsync the entire
>> > rdiff-backup output to another disk). I've enjoyed using it quite a
>> > bit during my own personal-computer emergencies; the maintenance
>> > required from me has been zero, and I have used it from time to
>> > time to restore, proving it actually works. Hardlinks can be used
>> > to tag versions of a file-directory tree recursively and
>> > relatively compactly.
>> >
>> > It won't be as compact as a git-aware solution (since git tends to
>> > rewrite entire files, which confuses file-based incremental
>> > differential backup), but the amount of data we are talking about
>> > is pretty small, and as far as a lowest-common-denominator
>> > tradeoff for use in emergencies, I have to give it a lot of
>> > praise. The main advantage it has here is that it implements
>> > point-in-time recovery operations that are easy to use and
>> > actually seem to work. That said, I've mostly done targeted
>> > recoveries rather than trying to recover entire trees.
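As a minimal sketch of the rdiff-backup workflow described above, run against throwaway directories (the real arguments would be the repository and a backup disk; the flags are standard rdiff-backup, but treat the details as illustrative):

```shell
#!/bin/sh
# Sketch only: exercises rdiff-backup against temporary directories.
command -v rdiff-backup >/dev/null 2>&1 || { echo "skipped: rdiff-backup not installed"; exit 0; }

src=$(mktemp -d); dest=$(mktemp -d); restore=$(mktemp -d)
echo "v1" > "$src/file"

# Take a backup: dest holds a mirror of src plus reverse diffs
# (under rdiff-backup-data/) that enable point-in-time recovery.
rdiff-backup "$src" "$dest"

sleep 1                       # increments need distinct timestamps
echo "v2" > "$src/file"
rdiff-backup "$src" "$dest"   # second run stores only the differences

# Point-in-time restore: recover the tree as of a given time
# ("now" here; "3D" would mean three days ago).
rdiff-backup --restore-as-of now "$dest" "$restore/out"
cat "$restore/out/file"       # prints the latest version, v2

# Retention is pruned per replica: drop increments older than 8 weeks.
rdiff-backup --remove-older-than 8W --force "$dest"
```

The restore step is the point: recovery is a single command against the backup directory, no git knowledge required.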
>>
>> I have the same setup, and the same feedback.
>
>
> I had the same setup, but got tired of how rdiff-backup behaved when a
> backup was interrupted (very lengthy cleanup process). Since then I've
> switched to an rsync setup that does essentially the same thing as
> rdiff-backup (uses hardlinks between multiple copies).
>
> The only downside I'm aware of is that my rsync backups aren't guaranteed to
> be "consistent" (for however consistent a backup of an active FS would
> be...).
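The hardlink-based rsync scheme described above can be sketched with `--link-dest` (paths here are throwaway stand-ins): each snapshot is a complete, browsable tree, but files unchanged since the previous snapshot are hardlinks into it and take no extra space.

```shell
#!/bin/sh
# Sketch, assuming stock rsync: rdiff-backup-style snapshots via hardlinks.
command -v rsync >/dev/null 2>&1 || { echo "skipped: rsync not installed"; exit 0; }

src=$(mktemp -d); snaps=$(mktemp -d)
echo "unchanged" > "$src/stable"
echo "v1"        > "$src/volatile"

# First snapshot: a plain full copy.
rsync -a "$src/" "$snaps/day1/"

echo "v2" > "$src/volatile"

# Second snapshot: --link-dest points at the previous snapshot (absolute
# path); files unchanged since day1 are hardlinked from it, not copied.
rsync -a --link-dest="$snaps/day1" "$src/" "$snaps/day2/"

# day1/stable and day2/stable now share one inode; day2/volatile is new.
```

Expiring old snapshots is then just `rm -rf` on a snapshot directory, since the hardlinks keep shared files alive as long as any snapshot references them.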
I forgot to add one more thing to my first mail, though it's very
important to my feeble recommendation: blind synchronization is a
great way to propagate destruction. rdiff-backup (and perhaps others,
too) has a file/directory structure that is, as far as I know,
additive, and pruning can be done independently at different replicas
with different retention policies. If done just right (I'm not sure
about the case of concurrent backups being taken), one can write a
re-check, as a safeguard, that no files are to be modified or deleted
by the synchronization.
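One way such a safeguard re-check might look, as a sketch using rsync's dry-run mode against throwaway directories (a fuller version would also flag overwrites of existing files, not just deletions):

```shell
#!/bin/sh
# Sketch of the safeguard: dry-run the synchronization first and refuse
# to proceed if it would delete anything at the destination.
command -v rsync >/dev/null 2>&1 || { echo "skipped: rsync not installed"; exit 0; }

src=$(mktemp -d); dst=$(mktemp -d)
echo "new" > "$src/addition"       # additions are harmless
echo "old" > "$dst/precious"       # a deletion we want to be saved from

# -n (--dry-run) with --itemize-changes lists each pending change;
# deletions show up as lines beginning with "*deleting".
pending=$(rsync -an --delete --itemize-changes "$src/" "$dst/")

if printf '%s\n' "$pending" | grep -qE '^\*deleting'; then
    echo "refusing to sync: would delete files at destination" >&2
else
    rsync -a --delete "$src/" "$dst/"   # only runs when purely additive
fi
```

Here the check trips and nothing is synchronized, so a replica that has lost files cannot silently propagate that loss to the backup.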
--
fdr