From: Micah Yoder <yodermk(at)home(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: LARGE db dump/restore for upgrade question
Date: 2001-08-17 06:53:08
Message-ID: 01081702530808.01102@eclipse
Lists: pgsql-general
> I am trying a pg_dump right now, and in the first 25 minutes it dumped
> 54Mb, which means that a full dump will take about 200 hours! I would
> guess the restore would take about the same amount of time, so I would be
> looking at 17 DAYS of downtime to upgrade! Maybe it will speed up later in
Thinking out loud here ...
Perhaps a slightly modified (or even custom-written) pg_dump could dump your
large tables incrementally. Since you said the large tables only ever get
inserts, you should be able to run most of this while the DB is up. Have it
keep track of the timestamp of the last row dumped for each table; the next
time you run it, it can continue from there, appending to the same file.
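Something like this minimal shell sketch, say. All the table, column, and
file names here are invented for illustration, and it assumes each big
table is insert-only and has an indexed timestamp column:

  #!/bin/sh
  # Hypothetical incremental dump sketch -- names are made up.
  TABLE=big_table
  STATE=/var/backups/$TABLE.last     # timestamp of the last row dumped
  OUT=/var/backups/$TABLE.copy       # dump file we keep appending to

  LAST=$(cat $STATE 2>/dev/null || echo '1970-01-01 00:00:00')
  NOW=$(psql -Atc "SELECT now()" mydb)

  # Append only the rows inserted since the previous run.  (COPY from
  # a SELECT is only available on newer servers; if yours lacks it,
  # SELECT the new rows INTO a scratch table and COPY that instead.)
  psql -Atc "COPY (SELECT * FROM $TABLE
             WHERE created > '$LAST' AND created <= '$NOW')
             TO STDOUT" mydb >> $OUT

  echo "$NOW" > $STATE

Nothing pg_dump-specific there; each run just picks up wherever the state
file says the last one left off.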
If you keep the dump, then you won't have to do another large pg_dump next
time you upgrade, assuming you never do updates on those tables.
Of course, you'll still have to wait a while for it to restore after every
upgrade. The other suggestion about fast hardware should help there.
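Restoring the accumulated file would then be a single bulk load per table
(same invented names as above):

  # Recreate the table's schema first, then load everything that was
  # appended over time in one pass.
  psql -c "\copy big_table from '/var/backups/big_table.copy'" mydb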
If this doesn't make any sense, sorry, it's 3 in the morning....
> Not that I am complaining, postgres seems to handle this data volume quite
> well, and it is certainly worth every dollar I didn't pay for it. :)
What a ringing endorsement....
--
Like to travel? http://TravTalk.org
Micah Yoder Internet Development http://yoderdev.com