From: | Stephen Frost <sfrost(at)snowman(dot)net> |
---|---|
To: | Tatsuo Ishii <ishii(at)postgresql(dot)org> |
Cc: | pgsql-hackers(at)postgresql(dot)org |
Subject: | Re: Implementing incremental backup |
Date: | 2013-06-19 23:43:24 |
Message-ID: | 20130619234324.GZ23363@tamriel.snowman.net |
Lists: | pgsql-hackers |
* Tatsuo Ishii (ishii(at)postgresql(dot)org) wrote:
> I don't think using rsync (or tar or whatever general file utils)
> against TB database for incremental backup is practical. If it's
> practical, I would never propose my idea.
You could use rsync for incremental updates if you wanted; it would
certainly be faster in some cases, and it is entirely possible to run
it against multi-TB databases.
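For concreteness, here is a minimal sketch (my illustration, not
anything proposed in this thread) of driving rsync between
pg_start_backup() and pg_stop_backup() to refresh an existing base
backup incrementally; the paths, connection string, and psycopg2
dependency are assumptions for the example:

```python
import subprocess
import psycopg2

# Hypothetical locations for this sketch.
DATA_DIR = "/var/lib/postgresql/9.3/main/"   # live data directory (trailing slash matters to rsync)
BACKUP_DIR = "/backups/base/"                # previously taken base backup to refresh

conn = psycopg2.connect("dbname=postgres")
conn.autocommit = True
cur = conn.cursor()

# Mark the start of a file-level backup (the exclusive-backup API of this
# era); the second argument requests an immediate checkpoint.
cur.execute("SELECT pg_start_backup('rsync-incremental', true)")
try:
    # rsync skips unchanged files (and, over a network, transfers only
    # changed portions), so re-running it against an older copy acts as
    # an incremental refresh of the base backup.
    subprocess.check_call([
        "rsync", "-a", "--delete",
        "--exclude", "pg_xlog/*",        # WAL is archived separately
        "--exclude", "postmaster.pid",
        DATA_DIR, BACKUP_DIR,
    ])
finally:
    cur.execute("SELECT pg_stop_backup()")
```

The usual caveat applies: the rsync'd copy is only consistent once the
archived WAL covering the backup window is replayed on top of it.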
> > They're not WAL'd, so expecting them to work when restoring a backup of
> > a PG that had been running at the time of the backup is folly.
>
> Probably you forget about our nice pg_dump tool:-)
I don't consider pg_dump a mechanism for backing up multi-TB
databases. You're certainly welcome to use it for dumping unlogged
tables, but I can't support the notion that unlogged tables should be
covered by WAL-based, file-level backups. If we're going down this
road, I'd much rather see support for exporting whole files from PG
and importing them back in a way that completely avoids re-parsing or
re-validating the data and that supports pulling in indexes as part
of the import.
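Just to illustrate the pg_dump point above, a rough sketch (my own
example, with a hypothetical connection string and output path) that
finds the unlogged tables, the ones a WAL-based file-level backup
cannot restore, and hands them to pg_dump:

```python
import subprocess
import psycopg2

conn = psycopg2.connect("dbname=mydb")
cur = conn.cursor()

# relpersistence = 'u' marks unlogged relations (9.1 and later).
cur.execute("""
    SELECT format('%I.%I', n.nspname, c.relname)
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind = 'r' AND c.relpersistence = 'u'
""")
tables = [row[0] for row in cur.fetchall()]

if tables:
    # Dump only those tables, in custom format, alongside the file-level backup.
    cmd = ["pg_dump", "-Fc", "-f", "/backups/unlogged.dump"]
    for t in tables:
        cmd += ["-t", t]
    cmd.append("mydb")
    subprocess.check_call(cmd)
```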
Thanks,
Stephen