Upgrading Postgres large databases with blobs

From: "CAJ CAJ" <pguser(at)gmail(dot)com>
To: "postgres general" <pgsql-general(at)postgresql(dot)org>
Subject: Upgrading Postgres large databases with blobs
Date: 2007-03-10 19:40:21
Message-ID: 467669b30703101140l6c572699ocba6e830f1fc56c8@mail.gmail.com
Lists: pgsql-general

Hello,

For some reason, my first attempt to send this email to the list didn't get
through ....

We have several independent database servers with ~50GB+ databases running
postgres 8.0.x. We are planning to upgrade these databases to postgres
8.2.x over the weekend.

We plan to use the following steps to upgrade each server:

1. Dump the 8.0.x database cluster using 8.2.x pg_dumpall
% ./pg_dumpall > pgdumpall_backup.sql

2. Dump the 8.0.x database, including large objects, in compressed custom
format using 8.2.x pg_dump
% ./pg_dump -Fc -b -Z9 dbname > pgdump_lobs_backup
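For reference, the two dump steps can be wrapped in one small script. This is
only a sketch: the 8.2 binary path, database name, and backup directory are
assumptions, and the commands are echoed rather than executed so they can be
reviewed first.

```shell
#!/bin/sh
# Sketch of the dump phase; paths and names below are assumptions.
PG82_BIN=/usr/local/pgsql-8.2/bin   # hypothetical location of the 8.2.x binaries
DBNAME=dbname
BACKUP_DIR=/backups

# Step 1: globals and schemas for every database in the cluster
DUMPALL_CMD="$PG82_BIN/pg_dumpall > $BACKUP_DIR/pgdumpall_backup.sql"

# Step 2: compressed custom-format dump of the database, including large objects
DUMP_LOBS_CMD="$PG82_BIN/pg_dump -Fc -b -Z9 $DBNAME > $BACKUP_DIR/pgdump_lobs_backup"

# Echo instead of executing, so the commands can be checked before running
echo "$DUMPALL_CMD"
echo "$DUMP_LOBS_CMD"
```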

Restoring the database
1. Initialize the 8.2.x database
% initdb -D /data/pgdata

2. Restore template1 database from cluster dump
% ./psql -d template1 < pgdumpall_backup.sql

3. Drop database dbname, otherwise the restore will fail with an error about
the existing dbname
% dropdb dbname

4. Create fresh dbname
% createdb -O dbowner dbname

5. Restore the database with large objects
% ./pg_restore -v -Fc -d dbname -e -U dbowner < pgdump_lobs_backup
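The restore steps above can be strung together as one script. This is only a
sketch under assumed paths and names; by default it just prints each command
(set RUN=1 to actually execute them).

```shell
#!/bin/sh
# Sketch of the restore phase; paths and names are assumptions.
set -e
RUN=${RUN:-0}   # RUN=0 prints the commands only; RUN=1 executes them

step() {
    echo "+ $1"
    if [ "$RUN" = 1 ]; then sh -c "$1"; fi
}

DBNAME=dbname
DBOWNER=dbowner

step "initdb -D /data/pgdata"                    # 1. initialize the 8.2.x cluster
step "psql -d template1 < pgdumpall_backup.sql"  # 2. restore globals and schemas
step "dropdb $DBNAME"                            # 3. drop the copy just restored
step "createdb -O $DBOWNER $DBNAME"              # 4. recreate it empty
step "pg_restore -v -Fc -d $DBNAME -e -U $DBOWNER pgdump_lobs_backup"  # 5. data + blobs
```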

Some of the problems we have are:
1. We are not sure if all of the data will be available after a dump/restore
with the above process
2. The dump and restore process is far too slow to complete over the
weekend (dumping takes approx 1GB/hr on a dual G5 PPC 2GHz with 1GB RAM and
RAID 1 disks)
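On the first concern, one hedged way to check completeness is to generate an
exact COUNT(*) query for every user table, run the generated script against
both the old and new servers, and diff the results. This is a sketch, not from
the original thread; connection settings are assumptions.

```shell
#!/bin/sh
# Sketch: build a script of COUNT(*) queries for every user table.
# Run the generated SQL against both servers and diff the two outputs.
GEN_SQL="SELECT 'SELECT ''' || schemaname || '.' || tablename || ''', count(*) FROM '
         || quote_ident(schemaname) || '.' || quote_ident(tablename) || ';'
         FROM pg_tables
         WHERE schemaname NOT IN ('pg_catalog', 'information_schema');"

# Usage (assumed connection settings; run against each server, then compare):
#   psql -At -d dbname -c "$GEN_SQL" | psql -At -d dbname > counts_old.txt
#   psql -At -d dbname -c "$GEN_SQL" | psql -At -d dbname > counts_new.txt   (new server)
#   diff counts_old.txt counts_new.txt
#
# Large objects can be spot-checked the same way:
#   psql -At -d dbname -c "SELECT count(DISTINCT loid) FROM pg_largeobject;"
echo "$GEN_SQL"
```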

What is the fastest way to upgrade postgres for large databases that have
binary objects?
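On the speed concern, one commonly suggested approach (a sketch, not from the
original mail; hostnames and names are assumptions) is to skip the intermediate
file entirely and pipe pg_dump on the old server straight into pg_restore
pointed at the new cluster. Dropping -Z9 is deliberate: maximum compression is
usually CPU-bound, and with no on-disk archive there is little to gain from it.

```shell
#!/bin/sh
# Sketch: stream the dump directly into the new cluster instead of writing
# a ~50GB archive to disk first. Hostnames and names are assumptions.
OLD=oldhost
NEW=newhost
DBNAME=dbname
DBOWNER=dbowner

PIPE_CMD="pg_dump -h $OLD -Fc -b $DBNAME | pg_restore -h $NEW -d $DBNAME -U $DBOWNER"

# Echoed rather than executed, so it can be reviewed and adapted first
echo "$PIPE_CMD"
```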

Thanks for all your help.
