From: Merlin Moncure <mmoncure(at)gmail(dot)com>
To: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>
Cc: Michael Tharp <gxti(at)partiallystapled(dot)com>, Simon Riggs <simon(at)2ndquadrant(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Differential backup
Date: 2010-04-27 16:14:30
Message-ID: u2sb42b73151004270914v1e48ac76vdf15cd0fb03ed16e@mail.gmail.com
Lists: pgsql-hackers
On Tue, Apr 27, 2010 at 11:13 AM, Kevin Grittner
<Kevin(dot)Grittner(at)wicourts(dot)gov> wrote:
> Merlin Moncure <mmoncure(at)gmail(dot)com> wrote:
>
>> The proposal only seems a win to me if a fair percentage of the
>> larger files don't change, which strikes me as a relatively low
>> level case to optimize for.
>
> That's certainly a situation we face, with a relatively slow WAN in
> the middle.
>
> http://archives.postgresql.org/pgsql-admin/2009-07/msg00071.php
>
> I don't know how rare or common that is.
hm...interesting read. pretty clever. Your archiving requirements are high.
With the new stuff (HS/SR) taken into consideration, would you have
done your DR the same way if you had to do it all over again?
Part of my concern here is that manual filesystem-level backups are
going to become an increasingly arcane method of doing things as the
HS/SR train starts leaving the station.
hm, it would be pretty neat to see some of the things you do pushed
into logical (pg_dump) style backups...with some enhancements so that
it can skip tables that haven't changed and are already present in a
previously supplied dump. This is more complicated but maybe more
useful for a broader audience?
Side question: is it impractical to back up a hot standby via pg_dump
because of query conflict issues?
merlin