From: PFC <lists(at)peufeu(dot)com>
To: "Jesper Krogh" <jesper(at)krogh(dot)cc>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Restore performance?
Date: 2006-04-10 17:20:33
Message-ID: op.s7sygjtecigqcu@apollo13
Lists: pgsql-performance
> I'd run pg_dump | gzip > sqldump.gz on the old system.
If the source and destination databases are on different machines, you
can pipe pg_dump on the source machine to pg_restore on the destination
machine by using netcat.
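Something along these lines (a sketch; "mydb", "desthost" and port 2345 are made-up names, and the listen syntax varies between netcat flavors):

    # on the destination machine: listen, feed the stream to pg_restore
    nc -l -p 2345 | pg_restore -d mydb

    # on the source machine: dump in pg_restore's archive format, send it over
    pg_dump -Fc mydb | nc desthost 2345

Start the listener first so the dump has somewhere to go.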
If you only have 100 Mbps Ethernet, the network will be the bottleneck, so
compressing the data before sending it will make the whole transfer faster.
If you have Gb Ethernet, maybe you don't need to compress, but it doesn't
hurt to test.
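Compressed variant of the same pipe (again a sketch with placeholder names;
-Z0 turns off pg_dump's own archive compression so gzip does all the work):

    # source: compress the dump before it hits the wire
    pg_dump -Fc -Z0 mydb | gzip | nc desthost 2345

    # destination: decompress before restoring
    nc -l -p 2345 | gunzip | pg_restore -d mydb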
Use pg_restore instead of psql, and use a recent version of pg_dump that
can generate dumps in the newer archive formats (custom or tar), which is
what pg_restore reads.
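For example (database name is a placeholder, and the target database must
already exist):

    # dump in the custom archive format...
    pg_dump -Fc mydb > mydb.dump

    # ...and restore it with pg_restore
    pg_restore -d mydb mydb.dump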
If you need fast compression, use gzip -1 or even lzop, which is
incredibly fast.
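Dropped into the pipe above (assuming lzop is installed; like gzip, it acts
as a stdin-to-stdout filter when given no file arguments):

    # source: lzop trades a little compression ratio for much less CPU
    pg_dump -Fc -Z0 mydb | lzop | nc desthost 2345

    # destination
    nc -l -p 2345 | lzop -d | pg_restore -d mydb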
Turn off fsync for the duration of the restore (and turn it back on
afterwards), and set maintenance_work_mem to use most of your available RAM
for index creation.
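In postgresql.conf, something like this (values are illustrative; on 8.1 and
earlier the memory settings are plain integers in kB, and a pg_ctl reload is
enough to pick up the fsync change):

    fsync = off                    # for the restore ONLY; unsafe otherwise
    maintenance_work_mem = 524288  # 512 MB, sized to most of the available RAM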
I think that validating foreign key constraints involves large joins; it
might be good to raise work_mem also.
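Same idea, also in postgresql.conf (value illustrative, in kB on older
versions):

    work_mem = 131072   # 128 MB, gives the FK-validation joins room to sort/hash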
Check the speed of your disks with dd beforehand. You might get a
surprise.
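Something like this gives a rough sequential throughput number (path is a
placeholder; use a file bigger than RAM, or the read test just measures the
OS cache):

    # sequential write speed of the data disk
    dd if=/dev/zero of=/data/testfile bs=1M count=4096

    # sequential read speed
    dd if=/data/testfile of=/dev/null bs=1M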
Maybe you can also play with the bgwriter and checkpoint parameters.
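For instance (8.x-era parameter names, values purely illustrative):

    checkpoint_segments = 64    # fewer, bigger checkpoints during the bulk load
    checkpoint_timeout = 1800   # seconds between forced checkpoints
    bgwriter_delay = 200        # ms between background-writer rounds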