From: | Raghavendra <raghavendra(dot)rao(at)enterprisedb(dot)com> |
---|---|
To: | bhanu udaya <udayabhanu1984(at)hotmail(dot)com> |
Cc: | Magnus Hagander <magnus(at)hagander(dot)net>, François Beausoleil <francois(at)teksol(dot)info>, "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org> |
Subject: | Re: pg_Restore |
Date: | 2013-01-21 10:11:18 |
Message-ID: | CA+h6AhguyFqM2ojtWmoNqcZ157Y1kMwmCfc2sdUTrutHYeewzQ@mail.gmail.com |
Lists: | pgadmin-support pgsql-general |
On Mon, Jan 21, 2013 at 3:01 PM, bhanu udaya <udayabhanu1984(at)hotmail(dot)com> wrote:
> Hello,
> Greetings !
> I tried all the options below. It takes approximately 1 hour 30
> minutes to restore a 9GB database. We cannot afford that much time:
> executing the test cases takes only 10% of it, and waiting
> 1 hour 30 minutes after every test-case run is a lot for the
> team. Kindly let me know if we can reduce the database restoration time.
>
>
On Linux, the settings below work well together with the -j option of
pg_restore. Since you are on Windows, give it another try with these
options (as already suggested earlier in this thread).
shared_buffers= 1024MB
work_mem= 512MB
maintenance_work_mem = 1GB
checkpoint_segments = 128        # use a value between 128 and 256
checkpoint_timeout = 1h          # raised from the 15min default
autovacuum=off
track_counts=off
fsync=off
full_page_writes=off
synchronous_commit=off
bgwriter_delay = 50ms            # default is 200ms
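To make the parallel restore concrete, here is a minimal command sketch. The dump file name `mydb.dump` and database name `mydb` are placeholders, and -j requires a custom-format (-Fc) or directory-format (-Fd) dump, so the dump must be taken accordingly first:

```shell
# Take the dump in custom format so pg_restore can parallelize it:
pg_dump -Fc -f mydb.dump mydb

# Restore with 4 parallel jobs (tune -j to the number of CPU cores).
# Note: -j cannot be combined with --single-transaction (-1).
pg_restore -j 4 -d mydb mydb.dump
```

A plain-text (SQL) dump cannot be used with -j, which is a common reason the option appears to have no effect.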
---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/
>
> Thanks and Regards
> Radha Krishna
>
> ------------------------------
> Date: Mon, 21 Jan 2013 08:15:47 +0100
> Subject: Re: [GENERAL] pg_Restore
> From: magnus(at)hagander(dot)net
> To: udayabhanu1984(at)hotmail(dot)com
> CC: francois(at)teksol(dot)info; pgsql-general(at)postgresql(dot)org
>
>
>
> On Jan 21, 2013 7:17 AM, "bhanu udaya" <udayabhanu1984(at)hotmail(dot)com> wrote:
> >
> >
> > Hello,
> > Greetings !
> > Thank you for the prompt reply. I have changed the settings as listed
> below:
> > > > shared_buffers = 1024MB
> > > > work_mem = 512MB
> > > > maintenance_work_mem = 512MB
> > > > wal_buffers = 100MB
> > > fsync = off # ONLY DURING INITIAL DATA LOAD!
> > > checkpoint_segments = 128 # large value, such as 128 or 256 (16MB per
> file, check disk space)
> > > checkpoint_timeout = 30min
> > > checkpoint_completion_target = 0.9
> > > wal_level = minimal # You'll need to do a full base backup if you use
> this
> > But I still have the same problem. It has been almost 1 hour now and the
> > restoration is still going on. After every test case execution, we would
> > like to refresh the database, and the refresh should complete in less
> > than 10 minutes. Is this achievable with the kind of configuration I
> > listed in my earlier email?
> >
> > Kindly help , as how to speed up this restoration process.
> >
>
> Try running pg_restore with the -1 option. If that doesn't help, try -j 4
> or something like that (you'll have to remove the first option then; they
> can't be used at the same time).
>
> But it's going to be pushing it anyway. Your scenario is going to create
> thousands of files (assuming you have multiple tables in each of your
> schemas, as is normal), and that's just not something NTFS does very fast.
> Once the files are there, I bet loading the data is reasonably fast, since
> it can't be all that big....
>
> /Magnus
>