From: Steve Atkins <steve(at)blighty(dot)com>
To: Postgres general mailing list <pgsql-general(at)postgresql(dot)org>
Subject: Re: Long term database archival
Date: 2006-07-07 18:04:13
Message-ID: 33571208-307F-4DFB-9BAF-54CE5D2A5291@blighty.com
Lists: pgsql-general
On Jul 7, 2006, at 1:19 AM, Csaba Nagy wrote:
> On Thu, 2006-07-06 at 20:57, Karl O. Pinc wrote:
>> Hi,
>>
>> What is the best pg_dump format for long-term database
>> archival? That is, what format is most likely to
>> be able to be restored into a future PostgreSQL
>> cluster.
>
>> Should we want to restore a 20 year old backup
>> nobody's going to want to be messing around with
>> decoding a "custom" format dump if it does not
>> just load all by itself.
>
> Karl, I would say that if you really want data from 20 years ago, keep
> it in the custom format, along with a set of the sources of the
> postgres version which created the dump. Then in 20 years, when you
> need it, you can compile the sources and load the data into the
> original postgres version... Of course, you might also need to keep an
> image of the current OS and the hardware you're running on if you
> really want to be sure it will work in 20 years :-)
I've been burned by someone doing that, and then being unable to
find a BCPL compiler.
So don't do that.
Store them in a nice, neutral ASCII format, along with all the
documentation. If you can't imagine extracting the data with a
small perl script and less than a day's work today, then your
successor will likely curse your name in 20 years' time.
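For reference, a plain-text dump along those lines might look something
like the following. This is only a sketch; the database name "mydb" and
the output file name are placeholders, not anything from this thread:

    # plain SQL dump, using portable INSERT statements instead of COPY
    pg_dump --format=plain --inserts --file=mydb-archive.sql mydb

The resulting file is just SQL text, so even without a matching
pg_restore it can be read, grepped, or fed through a small script.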
Cheers,
Steve