From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Jeison Bedoya <jeisonb(at)audifarma(dot)com(dot)co>
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: performance database for backup/restore
Date: 2013-05-21 16:11:40
Message-ID: CAMkU=1zo9vtmm6X5XQxwDXPppzZnfxrc-eW3FHxS0WpcUzPX3g@mail.gmail.com
Lists: pgsql-performance
2013/5/21 Jeison Bedoya <jeisonb(at)audifarma(dot)com(dot)co>
> Hi all, I have a 400GB database running on a server with 128GB of RAM,
> 32 cores, and storage on a SAN over Fibre Channel. The problem is that a
> backup with pg_dumpall takes about 5 hours, and the subsequent restore
> takes about 17 hours. Is that a normal time for this process on that
> machine, or can I do something to optimize the backup/restore process?
>
How many database objects do you have? A few large objects will dump and
restore faster than a huge number of smallish objects.
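For a rough count (just a sketch; substitute your own database name), something like this breaks the catalog down by object type:

  psql -d yourdb -c "SELECT relkind, count(*) FROM pg_class GROUP BY relkind;"
  # relkind 'r' = tables, 'i' = indexes, 'S' = sequences, 't' = toast tables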
Where is your bottleneck? "top" should show you whether it is CPU or IO.
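For example (a rough sketch, not a prescription), while the dump or restore is running:

  top -c        # is a pg_dump/postgres process pegging a core (CPU-bound)?
  iostat -x 5   # or is disk utilization / IO wait the limit (needs sysstat)?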
I can pg_dump about 6GB/minute to /dev/null using all defaults with a small
number of large objects.
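If you want a comparable number for your system (untested sketch; "yourdb" is a placeholder), time a dump with the output thrown away and divide the database size by the elapsed time:

  time pg_dump yourdb > /dev/null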
Cheers,
Jeff