From: Jesper Krogh <jesper(at)krogh(dot)cc>
To: Rajesh Kumar Mallah <mallah(dot)rajesh(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Restore performance?
Date: 2006-04-10 15:54:47
Message-ID: 443A7FC7.2000001@krogh.cc
Lists: pgsql-performance
Rajesh Kumar Mallah wrote:
>> I'd run pg_dump | gzip > sqldump.gz on the old system. That took about
>> 30 hours and gave me a 90GB zipped file. Running
>> cat sqldump.gz | gunzip | psql
>> into the 8.1 database seems to take about the same time. Are there
>> any tricks I can use to speed this dump+restore process up?
>
>
> Was the last restore successful?
> If so, why do you want to repeat it?
"about the same time" == an estimated guess from restoring a few tables.
I was running a test run without disabling updates to the production
database; the real run is scheduled for Easter, when hopefully no users
are on the system. So I need to repeat it, and I'm just trying to get a
feeling for how much time I need to allocate for the operation.
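Since the goal is to shorten the restore window, one common trick (a hedged sketch of my own, not something suggested in this thread) is to temporarily relax durability and checkpointing on the 8.1 target during the bulk load and revert afterwards. The values below are illustrative assumptions for an 8.1-era postgresql.conf:

```ini
# Temporary postgresql.conf settings for a bulk restore -- revert afterwards.
maintenance_work_mem = 524288   # in KB; speeds up the CREATE INDEX steps after COPY
checkpoint_segments = 32        # fewer, larger checkpoints during the load
fsync = off                     # only safe if you can redo the restore from scratch
```

Restart or reload the server after changing these, and be sure fsync goes back on before letting users in.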
> 1. run new version of postgres in a different port and pipe pg_dump to psql
> this may save the CPU time of compression; there is no need for a temporary
> dump file.
>
> pg_dump | /path/to/psql813 -p 54XX newdb
I'll do that. It is a completely different machine anyway.
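Since it is a different machine, the pipe in point 1 would run across the network. A hedged sketch of what I have in mind; the host name "oldhost" and the database names are assumptions, adjust to your setup:

```shell
# Dump the old database straight into the new 8.1 cluster,
# skipping gzip and the on-disk dump file entirely.
pg_dump -h oldhost olddb | psql -d newdb
```

This trades the disk I/O and CPU of the compressed dump file for a single streaming pass, at the cost of not keeping a dump around if the restore fails partway.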
> 2. use new version of pg_dump to dump the old database as new version
> is supposed to be wiser.
Check.
> 3. make sure you are trapping the restore errors properly
> psql newdb 2>&1 | cat | tee err works for me.
That's noted.
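The essential part of point 3 is merging stderr into the tee pipeline so errors are both visible on screen and kept in a log. A minimal sketch of that pattern, using a stand-in command in place of psql so it can be tried anywhere (the "ERROR" line stands in for a psql error message):

```shell
# Merge stdout and stderr, show everything, and keep a copy in restore.log.
# The braced group is a stand-in for: gunzip -c sqldump.gz | psql newdb
{ echo "COPY done"; echo "ERROR: duplicate key" >&2; } 2>&1 | tee restore.log
# Afterwards, count the errors that were captured.
grep -c "ERROR" restore.log
```

Without the `2>&1`, psql's error messages would bypass tee and only flash past on the terminal.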
--
Jesper Krogh, jesper(at)krogh(dot)cc