Re: Gradual migration from integer to bigint?

From: Nick Cleaton <nick(at)cleaton(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: James Healy <james(at)yob(dot)id(dot)au>, Bruce Momjian <bruce(at)momjian(dot)us>, PostgreSQL General <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: Gradual migration from integer to bigint?
Date: 2023-10-05 12:25:56
Message-ID: CAFgz3kv_qhYqKh1bvHLyULRspXzksf=Z+kUCNLyxEyy80uJ+4Q@mail.gmail.com
Lists: pgsql-general

On Sat, 30 Sept 2023, 23:37 Tom Lane, <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:

>
> I think what you're asking for is a scheme whereby some rows in a
> table have datatype X in a particular column while other rows in
> the very same physical table have datatype Y in the same column.
>

An alternative for NOT NULL columns would be to give the bigint version of
the id a new attnum, and add a column to pg_attribute that links the new id
column back to the dropped old id column, so the table rewrite can be
avoided.
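For context, a dropped column already leaves its pg_attribute row behind
(attisdropped = true), so there is an existing catalog entry for the
proposed link column to point at. A rough illustration of what that looks
like today, using a made-up table t:

    -- hypothetical table: add a bigint id, then drop the old integer one
    CREATE TABLE t (id integer NOT NULL, payload text);
    ALTER TABLE t ADD COLUMN id_big bigint;
    ALTER TABLE t DROP COLUMN id;

    -- the dropped column's pg_attribute row remains (attisdropped = true);
    -- the proposal would add a catalog field linking id_big back to it
    SELECT attnum, attname, attisdropped
    FROM pg_attribute
    WHERE attrelid = 't'::regclass AND attnum > 0;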

A global change to the read code would be needed: on finding a NULL in a
NOT NULL column, check for a link to a dropped old column and, if one
exists, use that column's value instead. The check could be almost free in
the normal case if there's already a check for an unexpected NULL or a
too-short tuple.
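There is already a precedent for this kind of read-path substitution: since
PostgreSQL 11, adding a column with a constant default is metadata-only,
and tuple deforming falls back to pg_attribute.attmissingval when the
stored tuple is too short. A small illustration (the table name is just an
example):

    -- adding a column with a constant default is metadata-only (PG 11+):
    -- existing tuples stay short, and reads fall back to attmissingval
    ALTER TABLE t ADD COLUMN flag boolean NOT NULL DEFAULT false;

    SELECT attname, atthasmissing, attmissingval
    FROM pg_attribute
    WHERE attrelid = 't'::regclass AND attname = 'flag';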

A metadata-only operation can then create the new id column, drop, rename,
and link the old one, and fix up foreign keys etc. for the attnum change.
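For comparison, the operation users have to run today rewrites the whole
table and rebuilds its indexes; under this scheme the same user-visible
statement, or some opt-in variant of it, could become a catalog-only
change. Table and column names below are illustrative:

    -- today: forces a full table rewrite plus index rebuilds,
    -- holding an ACCESS EXCLUSIVE lock for the duration
    ALTER TABLE orders ALTER COLUMN id TYPE bigint;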

Indexes are an issue. Perhaps require the in-advance creation of indexes
like btree(id::bigint), mirroring every index involving id? Those could
then be swapped in as part of the same metadata operation.
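A minimal sketch of that in-advance index creation, assuming a table
orders whose existing primary key is a btree on id; the mirroring
expression index can be built without blocking writes:

    -- build the bigint-typed mirror of the existing index ahead of time;
    -- it would be swapped in during the metadata-only switch
    CREATE UNIQUE INDEX CONCURRENTLY orders_id_big_idx
        ON orders ((id::bigint));

One wrinkle: a unique expression index can't currently back a primary key
constraint, so the swap would presumably need to handle that case too.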
