From: snacktime <snacktime(at)gmail(dot)com>
To: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: minimizing downtime when upgrading
Date: 2006-06-16 05:08:16
Message-ID: 1f060c4c0606152208r3567d85q5f4b19ffcad8ce44@mail.gmail.com
Lists: pgsql-general
Does anyone have tips for minimizing downtime when upgrading? So far we
have done upgrades during scheduled downtime, but we have reached the
point where a standard dump/restore simply takes too long. What have
others done when downtime is critical?

The only solution we have come up with is to migrate the data to a new
database server on a per-user basis. Each user is a merchant, and the
data in the database is order data. Migrating one merchant at a time
limits the downtime per merchant to just the time it takes to migrate
that merchant's data, which is acceptable.
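For what it's worth, the per-merchant step above could be sketched roughly
like this. All names here are illustrative assumptions (an "orders" table
keyed by "merchant_id", hosts "oldhost" and "newhost"), and COPY from a
query needs PostgreSQL 8.2 or later; on older servers you would select the
rows into a temporary table first.

```shell
#!/bin/sh
# Hypothetical sketch of migrating one merchant's order data between servers.
# Table, column, host, and database names are all assumptions for illustration.
MERCHANT_ID=42
DUMPFILE="merchant_${MERCHANT_ID}.csv"

# Export only this merchant's rows from the old server.
psql -h oldhost -d orders_db -c \
  "\\copy (SELECT * FROM orders WHERE merchant_id = ${MERCHANT_ID}) TO '${DUMPFILE}' WITH CSV"

# Load them into the already-upgraded server.
psql -h newhost -d orders_db -c \
  "\\copy orders FROM '${DUMPFILE}' WITH CSV"
```

The merchant stays offline only between the export and the load, so the
window per merchant is bounded by that merchant's data volume rather than
the whole database.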
Any other ideas?