Re: Large Database Restore

From: "Jim C(dot) Nasby" <decibel(at)decibel(dot)org>
To: Lee Keel <lee(dot)keel(at)uai(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Large Database Restore
Date: 2007-05-17 16:04:17
Message-ID: 20070517160417.GH14548@nasby.net
Lists: pgsql-general

On Thu, May 17, 2007 at 08:19:08AM -0500, Lee Keel wrote:
> I am restoring a 51GB backup file that has been running for almost 26 hours.
> There have been no errors and things are still working. I have turned fsync
> off, but that still did not speed things up. Can anyone provide me with the
> optimal settings for restoring a large database?

Well, fsync=off is the biggest one. The others to look at:

maintenance_work_mem = 1GB (I've generally had problems setting it over
1GB, and I'm not sure going higher would add much benefit anyway)
checkpoint_timeout = 1 hour
checkpoint_segments = something huge (you want to reduce the frequency of
checkpoints, ideally to no more than one every 20 minutes or so)
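
Concretely, those restore-time settings might look something like this in
postgresql.conf (the exact numbers are guesses assuming you have a couple
GB of RAM to spare; tune them to your hardware):

    fsync = off                  # for the restore only -- turn it back on afterwards!
    maintenance_work_mem = 1GB   # speeds up CREATE INDEX and FK validation
    checkpoint_timeout = 1h
    checkpoint_segments = 128    # ~2GB of WAL between forced checkpoints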

Finally, if you've got a multi-CPU machine, you might want to build all
the indexes and table constraints as a separate step, and run them
through perl or something so that you'll utilize more than just one CPU.
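
Here's a rough sketch of that idea in Python rather than perl (the DDL
statements are made-up placeholders; in practice you'd pull the real ones
out of the dump, e.g. with pg_restore -l or a schema-only pg_dump):

    #!/usr/bin/env python
    # Run post-load DDL through several psql sessions at once so index
    # builds can use more than one CPU.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    DB = "mydb"  # placeholder database name

    STATEMENTS = [  # hypothetical examples
        "CREATE INDEX orders_customer_idx ON orders (customer_id);",
        "CREATE INDEX orders_date_idx ON orders (order_date);",
        "ALTER TABLE orders ADD FOREIGN KEY (customer_id) REFERENCES customers (id);",
    ]

    def run(sql):
        # each psql call gets its own backend, so N workers keep N CPUs busy
        subprocess.check_call(["psql", "-d", DB, "-c", sql])

    with ThreadPoolExecutor(max_workers=4) as pool:  # ~one worker per core
        list(pool.map(run, STATEMENTS))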

Of course the downside to all of this is that it would mean starting
over from scratch.

Ultimately though, once your database gets past a certain size, you
really want to be using PITR and not pg_dump as your main recovery
strategy.
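
The core of PITR is continuous WAL archiving: take a base backup
(pg_start_backup('label'), copy the data directory, pg_stop_backup()) and
have the server ship each completed WAL segment somewhere safe, with
something along these lines in postgresql.conf (the path is just an
example):

    archive_command = 'cp %p /mnt/server/archivedir/%f'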
--
Jim Nasby decibel(at)decibel(dot)org
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
