From: | "Mark Woodward" <pgsql(at)mohawksoft(dot)com> |
---|---|
To: | "Martijn van Oosterhout" <kleptog(at)svana(dot)org> |
Cc: | "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-hackers(at)postgresql(dot)org |
Subject: | Re: Upgrading a database dump/restore |
Date: | 2006-10-09 16:30:13 |
Message-ID: | 16542.24.91.171.78.1160411413.squirrel@mail.mohawksoft.com |
Lists: pgsql-hackers
> On Mon, Oct 09, 2006 at 11:50:10AM -0400, Mark Woodward wrote:
>> > That one is easy: there are no rules. We already know how to deal with
>> > catalog restructurings --- you do the equivalent of a pg_dump -s and
>> > reload. Any proposed pg_upgrade that can't cope with this will be
>> > rejected out of hand, because that technology was already proven five
>> > years ago.
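
For what it's worth, the schema-only path described above is roughly the
following (just a sketch; "mydb" and the file names are placeholders):

    # dump only the schema definitions, no data
    pg_dump -s mydb > schema.sql

    # ... install the new server version and initdb a fresh cluster ...

    # recreate all the catalog objects in the new cluster
    createdb mydb
    psql -d mydb -f schema.sql

    # the data itself still has to be reloaded separately, which is
    # the expensive part I was complaining about
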
>
> <snip>
>
>> Dumping out a database is bad enough, but that's only the data, and that
>> can take (mostly) only hours. Recreating a large database with complex
>> indexes can take hours or days for the data and hours per index; it adds
>> up.
>
> I think you missed the point of the email you replied to. *catalog*
> changes are quick and (relatively) easy. Even with 10,000 tables, it
> would only take a few moments to rewrite the entire catalog to a new
> version.
>
>> If it is a data format issue, maybe there should be a forum for a "next
>> gen" version of the current data layout that is extensible without
>> restructuring. This is not something that a couple of people can go off
>> and do and submit a patch; it is something that has to be supported and
>> promoted by the core team, otherwise it won't happen. We all know that.
>
> Actually, the data format is not the issue either. The tuple structure
> hasn't changed that often. What has changed is the internal format of a
> few types, but postgresql could support both the old and the new types
> simultaneously. There has already been a statement from core members
> that if someone comes up with a tool to handle the catalog upgrade,
> they'd be willing to keep the code for older types around under the
> original OID, so the newer server would still be able to read the
> older format.
That's good to know.
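
The reason that works, as I understand it, is that a type's input/output
functions are looked up through pg_type by OID, so the old entries could in
principle be kept alongside the new ones. Just as an illustration of that
mapping (nothing official, only a catalog query):

    # show the OID and the I/O functions for a couple of built-in types
    psql -d postgres -c \
      "SELECT oid, typname, typinput, typoutput
         FROM pg_type
        WHERE typname IN ('text', 'timestamp');"
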
>
>> The question is whether or not you all think it is worth doing. I've
>> done
>> consulting work for some very large companies that everyone has heard
>> of.
>> These sorts of things matter.
>
> People are working on it; someone even got as far as handling most
> catalog upgrades. The hard part is going to be making sure that even if
> the power fails halfway through an upgrade, your data will still be
> readable...
Well, I think that any *real* DBA understands and accepts that issues like
power failure and hardware failure create situations where "suboptimal"
conditions exist. :-) Stopping the database and copying the pg directory
addresses this problem: upon failure, a simple "mv bkdir pgdir" gets you
started again.
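
Concretely, I mean something like this (a rough sketch; the data directory
path and the stop options will vary by installation):

    # stop the cluster cleanly before touching anything
    pg_ctl -D /var/lib/pgsql/data stop -m fast

    # keep a full copy of the data directory as the fallback
    cp -a /var/lib/pgsql/data /var/lib/pgsql/data.bak

    # ... run the in-place upgrade against /var/lib/pgsql/data ...

    # if anything goes wrong, roll back by putting the copy back:
    #   rm -rf /var/lib/pgsql/data
    #   mv /var/lib/pgsql/data.bak /var/lib/pgsql/data
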
A system on a good UPS and reliable hardware is exactly the sort of
deployment that would benefit most from an "in place" upgrade. There is no
universal panacea with zero risk; one can only mitigate risk.
That being said, it should be the "preferred" method of upgrade, with new
versions not being released until they can migrate cleanly. A dump/restore
should be a last resort. Don't you think?