Re: Long term database archival

From: Shane Ambler <pgsql(at)007Marketing(dot)com>
To: PostgreSQL Mailing lists <pgsql-general(at)postgresql(dot)org>
Subject: Re: Long term database archival
Date: 2006-07-07 09:45:47
Message-ID: C0D46363.462D4%pgsql@007Marketing.com
Lists: pgsql-general

On 7/7/2006 17:49, "Csaba Nagy" <nagy(at)ecircle-ag(dot)com> wrote:

> On Thu, 2006-07-06 at 20:57, Karl O. Pinc wrote:
>> Hi,
>>
>> What is the best pg_dump format for long-term database
>> archival? That is, what format is most likely to restore
>> cleanly into a future PostgreSQL cluster?
>
>> Should we want to restore a 20 year old backup
>> nobody's going to want to be messing around with
>> decoding a "custom" format dump if it does not
>> just load all by itself.
>
> Karl, I would say that if you really want the data from 20 years ago, keep
> it in the custom format, along with a copy of the PostgreSQL sources
> that created the dump. Then, in 20 years, when you need it, you can
> compile the sources and load the data into the original PostgreSQL
> version... of course, you might also need to keep an image of the current
> OS and the hardware you're running on if you really want to be sure it
> will work in 20 years :-)
>
> Cheers,
> Csaba.
>
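
A minimal sketch of Csaba's custom-format approach, assuming a database
named "archivedb" (the name is only an example):

    # dump in pg_dump's custom format (-Fc); it is compressed and is
    # restored with pg_restore from a matching PostgreSQL version
    pg_dump -Fc -f archive_2006.dump archivedb

    # later: restore with the pg_restore built from the archived sources
    pg_restore -d archivedb archive_2006.dump

Keeping the matching source tarball next to the dump file is the part
that makes this recoverable decades later.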

Depending on the size of the data (if it isn't too large), you could consider
creating a separate database for archives, maybe even one for each year.

This can be on an old server or backup server instead of the production
one.

Unless the data is too large, you can dump/restore the archive data into each
new PostgreSQL version as you upgrade, meaning the data will always be
available and you won't have any format issues when you want to retrieve it.
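
A rough sketch of that upgrade path, assuming a database named "archivedb"
(again just an example name). A plain SQL dump is the most portable format
between versions:

    # on the old server: plain SQL dump of the archive database
    pg_dump archivedb > archivedb.sql

    # on the new server: recreate the database and reload
    createdb archivedb
    psql -d archivedb -f archivedb.sql

Repeating this at each major version upgrade means the archive never lags
behind the server you actually have running.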
