From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Greg Stark <stark(at)enterprisedb(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Bruce Momjian <bruce(at)momjian(dot)us>, Josh Berkus <josh(at)agliodbs(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>, Oleg Bartunov <oleg(at)sai(dot)msu(dot)su>, Teodor Sigaev <teodor(at)sigaev(dot)ru>
Subject: Re: pg_migrator and an 8.3-compatible tsvector data type
Date: 2009-06-01 15:34:49
Message-ID: 603c8f070906010834p3e2f80fcn99610c61aa004e40@mail.gmail.com
Lists: pgsql-hackers

On Mon, Jun 1, 2009 at 11:10 AM, Greg Stark <stark(at)enterprisedb(dot)com> wrote:
> On Mon, Jun 1, 2009 at 4:03 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>> On Sun, May 31, 2009 at 11:49 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>>> "Updating the catalog tables directly via SQL"? Good luck making that
>>> work. If you ever get it to work at all, it'll require a pile of hacks
>>> that will make pg_migrator look like paradise.
>>
>> For clarity, I really mean "from a standalone backend", but ideally
>> I'd like it to be SQL.
>
> Keep in mind that you have catalogs in all the databases, and even in
> standalone mode you need those catalogs to find the, er, catalogs.
> There's a reason bootstrap mode is so limited and even then it needs
> some of the catalogs to already be in place.
Yep. Changes to the schema of the bootstrap tables are definitely the
toughest nut to crack. I haven't been convinced that it's impossible,
but given my relative level of knowledge of the code compared to Tom,
that could well be an indication that I'm overly optimistic.
>>> (Which is not to say that pg_migrator isn't a hack; it surely is.
>>> But it looks like the least painful approach available.)
>>
>> Maybe. It seems that we don't have a good way of handling datatype
>> conversions. The approaches that have been proposed for tsvector
>> wouldn't work at all but for the fact that the new output function can
>> handle the old internal representation, which is not something that we
>> can guarantee in every case.
>
> Well I think all we need for that is to have pg_migrator provide the
> old output function wrapped up in a migrate_foo() C function.
Well that would be better, but it still leaves the database temporarily broken.
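
To make that concrete, the kind of thing I have in mind is a post-upgrade
script along these lines (every name here - the support library, the
function, the documents table - is invented purely for illustration; this
is a sketch of the idea, not something pg_migrator actually ships):

    -- wrapper around the 8.3 output code, installed by the migrator;
    -- given the old on-disk bits it returns the standard text form
    CREATE FUNCTION tsvector_83_to_text(tsvector) RETURNS text
        AS '$libdir/pg_migrator_support', 'tsvector_83_to_text'
        LANGUAGE C STRICT;

    -- then each affected column gets rewritten by round-tripping
    -- through text
    ALTER TABLE documents
        ALTER COLUMN body_tsv TYPE tsvector
        USING tsvector_83_to_text(body_tsv)::tsvector;

Until that second step has run against every affected table, any query that
touches one of those columns is reading bits the new code doesn't really
understand.
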
>> And, even so, they leave the database in
>> a broken state until the post-migration scripts have been run. The
>> good news is that tsvector is not a datatype that everyone uses, and
>> those who do probably don't use it in every table, but what happens
>> when we want to change numeric incompatibly?
>
> Or, say, timestamp...
Yeah.
>> We really need to figure out an approach that lets us keep the old
>> datatypes around under a different name while making the original name
>> be the new version of the datatype. That way people can migrate and
>> be up, and deal with the need to rewrite their tables at a later time.
>
> I do agree that having to rewrite the whole table isn't really
> "upgrade-in-place".
>
> But the work to support multiple versions of data types is more than
> you're describing. You need to be concerned about things like joins
> between tables when some columns are the old data type and some the
> new, etc.
True.
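
To spell out what I'm imagining (tsvector_83 and every other name here is
hypothetical; this is only a sketch of the shape of it): the migrator would
relabel existing columns to a legacy type at upgrade time, so right after
the upgrade you'd have, say, documents.body_tsv of type tsvector_83 still
holding the old bits, while anything newly created uses the real tsvector.
Each table could then be converted whenever it's convenient:

    -- done later, at the user's leisure; this rewrites the table
    -- (assumes a cast from tsvector_83 to tsvector has been provided)
    ALTER TABLE documents
        ALTER COLUMN body_tsv TYPE tsvector
        USING body_tsv::tsvector;

Which is exactly where your join point bites: until that happens, anything
that compares or joins a tsvector_83 column against a tsvector column needs
cross-type operators or an explicit cast, or the query simply won't plan.
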
> Ultimately people will have to convert the data types sometime.
Yes they will, but not having to do it as part of the upgrade is
important. What particularly bothers me is the possibility that the
database comes online and starts letting clients in (who don't know
that it's broken) while the breakage is still present.
...Robert