From: "ktm(at)rice(dot)edu" <ktm(at)rice(dot)edu>
To: Evgeny Shishkin <itparanoia(at)gmail(dot)com>
Cc: Jeison Bedoya <jeisonb(at)audifarma(dot)com(dot)co>, pgsql-performance(at)postgresql(dot)org
Subject: Re: performance database for backup/restore
Date: 2013-05-21 15:46:13
Message-ID: 20130521154613.GE12507@aart.rice.edu
Lists: pgsql-performance
On Tue, May 21, 2013 at 05:28:31PM +0400, Evgeny Shishkin wrote:
>
> On May 21, 2013, at 5:18 PM, Jeison Bedoya <jeisonb(at)audifarma(dot)com(dot)co> wrote:
>
> > Hi people, I have a 400GB database running on a server with 128GB RAM, 32 cores, and storage on a SAN over Fibre Channel. The problem is that a backup with pg_dumpall takes about 5 hours, and a subsequent restore takes about 17 hours. Is that a normal time for this process on this machine, or can I do something to optimize the backup/restore process?
> >
>
> I'd recommend dumping with
>
> pg_dump --format=c
>
> It will compress the output and later you can restore it in parallel with
>
> pg_restore -j 32 (for example)
>
> Right now you cannot dump in parallel; wait for the 9.3 release, or maybe someone will back-port it to the 9.2 pg_dump.
>
> Also, during the restore you can speed things up a little more by disabling fsync and synchronous_commit.
>
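For reference, those restore-time settings live in postgresql.conf and would look roughly like the sketch below (paths and values are illustrative; fsync = off is unsafe for normal operation, so revert both once the restore finishes):

    # postgresql.conf -- temporary settings for a bulk restore only
    fsync = off                 # do not force WAL writes to disk; unsafe outside a throwaway restore
    synchronous_commit = off    # do not wait for WAL flush at commit time

    # apply without a full restart, e.g.:
    #   pg_ctl reload -D /path/to/data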
If you have the space and I/O capacity, avoiding the compress option will be
much faster. The current compression scheme, which uses zlib-type compression, is
very CPU intensive and limits your dump rate. On a system that we have, a
dump without compression takes 20m, while with compression it takes 2h20m. Parallel
restore makes a big difference as well.
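As a concrete sketch, assuming a database named mydb (the name and paths here are illustrative), an uncompressed custom-format dump followed by a parallel restore would look something like:

    # custom-format dump with compression disabled (-Z 0); trades disk space for CPU
    pg_dump --format=c -Z 0 -f /backups/mydb.dump mydb

    # restore into a freshly created database using 32 parallel jobs
    pg_restore -j 32 -d mydb /backups/mydb.dump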
Regards,
Ken