From: | "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov> |
---|---|
To: | "Michael Andreasen" <michael(at)dunlops(dot)com>, "Andy Colson" <andy(at)squeakycode(dot)net> |
Cc: | <pgsql-performance(at)postgresql(dot)org> |
Subject: | Re: Fastest pq_restore? |
Date: | 2011-03-18 14:38:05 |
Message-ID: | 4D8327FD020000250003BA92@gw.wicourts.gov |
Lists: pgsql-performance

Andy Colson <andy(at)squeakycode(dot)net> wrote:
> On 03/17/2011 09:25 AM, Michael Andreasen wrote:
>> I've been looking around for information on doing a pg_restore as
>> fast as possible.
>> I am using a twin processor box with 2GB of memory
>> shared_buffers = 496MB
Probably about right.
>> maintenance_work_mem = 160MB
You might get a benefit from a bit more there; hard to say what's
best with so little RAM.
>> checkpoint_segments = 30
This one is hard to call without testing. Oddly, some machines do
better with the default of 3. Nobody knows why.
>> autovacuum = false
>> full_page_writes=false
Good.
> fsync = off
> synchronous_commit = off
Absolutely.
> bgwriter_lru_maxpages = 0
I hadn't thought much about that last one -- do you have benchmarks
to confirm that it helped with a bulk load?
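(For anyone who wants to test it: a minimal A/B timing sketch, with hypothetical database and dump names -- restore the same dump once with bgwriter_lru_maxpages = 0 and once with the default, and compare wall-clock times.)

  # restore the same dump twice, toggling bgwriter_lru_maxpages between runs
  dropdb loadtest; createdb loadtest
  time pg_restore -d loadtest mydump.custom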
You might want to set max_connections to something lower to free up
more RAM for caching, especially given how little of it you have.
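For reference, here's a minimal postgresql.conf sketch pulling together
the restore-only settings from this thread (the max_connections value
is an assumption; everything else is as quoted above, and fsync and
full_page_writes must be turned back on once the load finishes):

  # postgresql.conf -- bulk-restore settings; revert the unsafe ones afterwards
  shared_buffers = 496MB         # probably about right for 2GB of RAM
  maintenance_work_mem = 160MB   # a bit more may help index builds
  checkpoint_segments = 30       # worth testing; some machines prefer the default of 3
  autovacuum = off
  full_page_writes = off         # safe only because a failed restore can be rerun
  fsync = off                    # likewise; never leave this off in production
  synchronous_commit = off
  bgwriter_lru_maxpages = 0      # disables the bgwriter's LRU scan during the load
  max_connections = 10           # assumption: a low value frees RAM for caching

With those in place, the restore itself might look like
"pg_restore -d mydb mydump.custom" (names are placeholders); on 8.4 or
later, "pg_restore -j <n>" will also run the load in parallel across CPUs.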
-Kevin