From: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: "James B(dot) Byrne" <byrnejb(at)harte-lyne(dot)ca>, pgsql-general(at)postgresql(dot)org
Subject: Re: Backup using GiT?
Date: 2008-06-13 20:11:33
Message-ID: 20080613201133.GC5070@alvh.no-ip.org
Lists: pgsql-general
Tom Lane wrote:
> "James B. Byrne" <byrnejb(at)harte-lyne(dot)ca> writes:
> > GiT works by compressing deltas of the contents of successive versions of file
> > systems under repository control. It treats binary objects as just another
> > object under control. The question is, are successive (compressed) dumps of
> > an altered database sufficiently similar to make the deltas small enough to
> > warrant this approach?
>
> No. If you compress it, you can be pretty certain that the output will
> be different from the first point of difference to the end of the file.
> You'd have to work on uncompressed output, which might cost more than
> you'd end up saving ...
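Tom's point is easy to demonstrate with Python's zlib (standing in here for whatever compression a dump pipeline might use; the row data is invented for illustration). Changing a single record in the middle of the input produces a compressed stream that differs from the divergence point onward, and the trailing Adler-32 checksum in the zlib format guarantees the final bytes differ too, so a delta between the two compressed files cannot stay small:

```python
import zlib

# Two "dumps" identical except for one changed record in the middle.
dump_a = b"".join(b"row-%04d\n" % i for i in range(1000))
dump_b = dump_a.replace(b"row-0500", b"row-9999")

za, zb = zlib.compress(dump_a), zlib.compress(dump_b)

# Measure how far into the compressed streams the first difference appears.
prefix = 0
for x, y in zip(za, zb):
    if x != y:
        break
    prefix += 1

print("compressed size:", len(za))
print("shared prefix of compressed streams:", prefix)
print("final checksum bytes differ:", za[-4:] != zb[-4:])
```

The shared prefix is typically a small fraction of the stream, because deflate's back-references and Huffman coding entangle later output with earlier content; the uncompressed dumps, by contrast, differ on exactly one line.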
The other problem is that since the tables are not dumped in any
consistent order, it's pretty unlikely that you'd get any similarity
between two dumps of the same table. To get any benefit, you'd need to
get pg_dump to dump sorted tuples.
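The effect of row ordering on line-based deltas can be sketched with Python's difflib standing in for git's delta machinery (the table rows and seeds here are invented). Two dumps of the same table in different physical orders produce a huge diff, while sorting the rows first reduces the delta to the one inserted row:

```python
import difflib
import random

# Hypothetical rows of one table, as a dump might emit them.
rows = ["%d\tuser%d" % (i, i) for i in range(100)]

# Two dumps: same data plus one new row, but in different physical orders.
dump1 = rows[:]
dump2 = rows[:] + ["100\tuser100"]
random.Random(1).shuffle(dump1)
random.Random(2).shuffle(dump2)

def delta_size(a, b):
    # Count added/removed lines a line-based delta would have to store,
    # excluding the "---"/"+++" file headers emitted by unified_diff.
    return sum(
        1
        for d in difflib.unified_diff(a, b, lineterm="")
        if d.startswith(("+", "-")) and not d.startswith(("+++", "---"))
    )

print("delta lines, unsorted dumps:", delta_size(dump1, dump2))
print("delta lines, sorted dumps:  ", delta_size(sorted(dump1), sorted(dump2)))
```

With sorted output the delta is a single added line; unsorted, nearly every line moves and the delta approaches the size of the dumps themselves, which is why sorted tuples would be a precondition for this kind of backup scheme to pay off.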
--
Alvaro Herrera http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.