From: | "Peter Koczan" <pjkoczan(at)gmail(dot)com> |
---|---|
To: | "Suresh Gupta VG" <suresh(dot)g(at)zensar(dot)com> |
Cc: | "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com>, "pgsql-admin(at)postgresql(dot)org" <pgsql-admin(at)postgresql(dot)org> |
Subject: | Re: Postgresql takes more time to update |
Date: | 2007-10-29 17:39:53 |
Message-ID: | 4544e0330710291039y25d05c0mf541b2ad18b042ec@mail.gmail.com |
Lists: | pgsql-admin |
> Here I need to know from you
>
> a) Is there any better way to do this other than the above?
>
That's more complicated than you need to make it. What I do is something
like this (in a shell):
pg_dumpall -h host1 -p 5432 | psql -h host2 -p 5432 template1
and then delete the data as necessary via psql afterward.
You may need to tweak that command for your authentication scheme, substitute
the port your servers are actually listening on for 5432, and specify the
proper hosts.
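To make the moving parts explicit, here is a sketch of the same idea with the
host and port knobs pulled out into variables and a cleanup step afterward.
The host names, port numbers, and the example table are placeholders, not
anything from your setup:

```shell
#!/bin/sh
# Hypothetical source/destination servers -- adjust for your environment.
SRC_HOST=host1; SRC_PORT=5432
DST_HOST=host2; DST_PORT=5432

# Dump everything from the old server and feed it straight into the new one.
pg_dumpall -h "$SRC_HOST" -p "$SRC_PORT" \
  | psql -h "$DST_HOST" -p "$DST_PORT" template1

# Afterward, trim whatever you don't want on the new server via psql.
# (old_logs and the cutoff date are purely illustrative.)
psql -h "$DST_HOST" -p "$DST_PORT" -d mydb \
  -c "DELETE FROM old_logs WHERE logged < '2007-01-01'"
```

This assumes both servers will accept your connections without a password
prompt (e.g. trust or ident auth for your user); otherwise the pipe will stall
waiting for input.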
> b) Do I need to stop all the applications which are accessing this
> live Production database? (Should I need to take an outage of my complete
> system)
>
For testing, no: pg_dump takes a consistent snapshot of the database and
doesn't add much overhead (at least not when I do it). You may notice some
slowdown, but nothing that should seriously affect performance.
When you actually switch to the new version, probably yes. You'll have to find
some way to make sure there are no updates to the database while you dump and
restore, if you care about the data being consistent, so there will be downtime
in this case. You can either disable outside connections via pg_hba.conf or
move the server to a different port. Plan it out before the time comes.
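As a sketch of what that lockout might look like (the addresses and port are
illustrative, and the exact pg_hba.conf column layout varies a bit between
versions):

```
# pg_hba.conf -- allow local maintenance connections, reject everyone else
# TYPE  DATABASE  USER  CIDR-ADDRESS  METHOD
local   all       all                 trust
host    all       all   0.0.0.0/0     reject

# Or, in postgresql.conf, move the server to a non-standard port:
# port = 5433
```

A reload (pg_ctl reload) is enough to pick up pg_hba.conf changes; changing
the port requires a full restart.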
> c) Are there any core area where I need to take care of these
> activities?
>
I don't understand the question.
> d) I had 10 lacs of records on "pgsql 7.4.2". We are using "SunOS
> 5.9" version of Solaris machine. How much time it can take to take a dump
> of all the data.
>
That depends greatly on your hardware and your database design. For me,
dump/restoring 10 GB of data (which becomes 50 GB once all the indexes are
created) to decent but modest hardware takes about 4 hours.
> Please advice me to carry on this process successfully and safely with out
> any conflicts. Thanks in advance.
>
Test, figure out how to do things, plan, manage any possible downtime.
That's the process in a nutshell.