From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Zdenek Kotala <Zdenek(dot)Kotala(at)Sun(dot)COM>
Cc: Heikki Linnakangas <heikki(at)enterprisedb(dot)com>, Gregory Stark <stark(at)enterprisedb(dot)com>, PostgreSQL-development Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Proposal: In-Place upgrade concept
Date: 2007-07-03 18:09:22
Message-ID: 11394.1183486162@sss.pgh.pa.us
Lists: pgsql-hackers
Zdenek Kotala <Zdenek(dot)Kotala(at)Sun(dot)COM> writes:
> Tom Lane wrote:
>> Yeah, I'm with Heikki on this. What I see as a sane project definition
>> is:
>>
>> * pg_migrator or equivalent to convert the system catalogs
>> * a hook in ReadBuffer to allow a data page conversion procedure to
>> be applied, on the basis of checking for old page layout version.
> pg_migrator is a separate tool which requires the old postgres version,
> and I would like to have a solution in the postgres binary, without the
> old version being present. Very often the new postgres version is stored
> in the same location (e.g. /usr/bin) and normal users could have a problem.
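For illustration, a minimal sketch of the ReadBuffer-hook idea from the quoted
list above, assuming the page-layout-version macros in storage/bufpage.h;
OLD_PAGE_LAYOUT_VERSION and convert_page_from_v3() are hypothetical names for
this sketch, not existing PostgreSQL code:

    #include "postgres.h"
    #include "storage/bufpage.h"

    /* illustrative only -- not part of any released PostgreSQL API */
    #define OLD_PAGE_LAYOUT_VERSION  3
    extern void convert_page_from_v3(Page page);

    /*
     * Hypothetical helper called from ReadBuffer after a page has been
     * read in: check the layout version stamped in the page header and
     * convert the page in place if it still uses the previous layout.
     */
    static void
    ConvertPageIfNeeded(Page page)
    {
        uint16      version = PageGetPageLayoutVersion(page);

        if (version == PG_PAGE_LAYOUT_VERSION)
            return;             /* already current, nothing to do */

        if (version == OLD_PAGE_LAYOUT_VERSION)
            convert_page_from_v3(page); /* rewrite header/tuples in place */
        else
            elog(ERROR, "unsupported page layout version %u", version);
    }
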
Again, you are setting yourself up for complete failure if you insist
on having every possible nicety in the first version. An incremental
approach is far more likely to succeed than a "big bang".
I don't see a strong need to have a solution in-the-binary at all.
I would envision that packagers of, say, 8.4 would include a minimal
8.3 build under an old/ subdirectory, and pg_migrator or a similar
tool could invoke the old postmaster from there to do the catalog
dumping. (In an RPM or similar environment, the user could even
"rpm -e postgresql-upgrade" to get rid of the deadwood after completing
the upgrade, whereas with an integrated binary you're stuck carrying
around a lot of one-time-use code.)
This strikes me as approximately a thousand percent more maintainable
than trying to have a single set of code coping with multiple catalog
representations. Also it scales easily to supporting more than one back
version, whereas doing the same inside one binary will not scale at all.
Keep in mind that if your proposal involves any serious limitation on
the developers' freedom to refactor internal backend APIs or change
catalog representations around, it *will be rejected*. Do not have any
illusions on that point. It'll be a tough enough sell freezing on-disk
representations for user data. Demanding the internal ability to read
old catalog versions would be a large and ongoing drag on development;
I do not think we'll hold still for it. (To point out just one of many
problems, it'd largely destroy the C-struct-overlay technique for
reading catalogs.)
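To make that last point concrete, a minimal sketch of the C-struct-overlay
technique in question: backend code casts the fixed-width portion of a catalog
tuple directly to the compiled struct (here Form_pg_class via GETSTRUCT), which
only works because the on-disk row layout and the struct declaration match
field for field. get_relation_pages() is an illustrative helper for this
sketch, not an existing backend function.

    #include "postgres.h"
    #include "access/htup.h"        /* HeapTuple, GETSTRUCT */
    #include "catalog/pg_class.h"   /* Form_pg_class */

    /*
     * Illustrative helper: read pg_class.relpages by overlaying the
     * compiled Form_pg_class struct on the tuple data.  Reading older
     * catalog layouts in the same backend would break this direct
     * correspondence between the struct and what is on disk.
     */
    static int32
    get_relation_pages(HeapTuple reltup)
    {
        Form_pg_class relform = (Form_pg_class) GETSTRUCT(reltup);

        return relform->relpages;   /* direct struct access, no deforming */
    }
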
regards, tom lane