Re: Alter the column data type of the large data volume table.

From: Rich Shepard <rshepard(at)appl-ecosys(dot)com>
To: PostgreSQL General <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: Alter the column data type of the large data volume table.
Date: 2020-12-03 17:18:05
Message-ID: alpine.LNX.2.20.2012030915280.29996@salmo.appl-ecosys.com
Lists: pgsql-general

On Thu, 3 Dec 2020, Michael Lewis wrote:

> On Wed, Dec 2, 2020 at 11:53 PM charles meng <xlyybz(at)gmail(dot)com> wrote:

>> I have a table with 1.6 billion records. The data type of the primary key
>> column was incorrectly set to integer. I need to change the column's type
>> to bigint. Are there any ideas for this?

> You can add a new column with no default value (so it is null for existing
> rows) and have it be very fast. Then you can gradually update rows in
> batches (if on PG11+, perhaps use a DO script with a loop that commits
> after every X rows) to set the new column equal to the primary key.
> Lastly, in a transaction, update any new rows where the bigint column is
> still null, change which column is the primary key, and drop the old one.
> This keeps each transaction reasonably sized so it doesn't hold up other
> processes.
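[A minimal sketch of that approach, assuming PG11+; the table name "t", the
columns "id"/"id_big", and the constraint name "t_pkey" are invented here
for illustration:

-- 1. Cheap: adds the column as all-NULL, no table rewrite.
ALTER TABLE t ADD COLUMN id_big bigint;

-- 2. Backfill in batches. COMMIT inside a DO block works on PG11+
--    as long as the block is not run inside an outer transaction.
--    A partial index on (id) WHERE id_big IS NULL can speed the lookups.
DO $$
DECLARE
  n bigint;
BEGIN
  LOOP
    UPDATE t
       SET id_big = id
     WHERE id IN (SELECT id FROM t
                   WHERE id_big IS NULL
                   LIMIT 10000);
    GET DIAGNOSTICS n = ROW_COUNT;
    EXIT WHEN n = 0;
    COMMIT;  -- release locks and let other sessions proceed
  END LOOP;
END $$;

-- 3. Build the replacement unique index without blocking writes
--    (CONCURRENTLY cannot run inside a transaction block).
CREATE UNIQUE INDEX CONCURRENTLY t_id_big_idx ON t (id_big);

-- 4. The swap, in one short transaction.
BEGIN;
UPDATE t SET id_big = id WHERE id_big IS NULL;  -- catch stragglers
ALTER TABLE t ALTER COLUMN id_big SET NOT NULL;
ALTER TABLE t DROP CONSTRAINT t_pkey;
ALTER TABLE t ADD CONSTRAINT t_pkey PRIMARY KEY USING INDEX t_id_big_idx;
ALTER TABLE t DROP COLUMN id;
ALTER TABLE t RENAME COLUMN id_big TO id;
COMMIT;

Building the unique index CONCURRENTLY beforehand keeps the final
transaction short, since ADD PRIMARY KEY would otherwise have to build its
index while holding an ACCESS EXCLUSIVE lock.]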

Tell me, please, why

ALTER TABLE <tablename> ALTER COLUMN <columnname> SET DATA TYPE BIGINT

will not do the job?

I've found that some varchar columns in a couple of tables were too small and
used the statement above to increase their size. It worked perfectly.
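
[For comparison, that varchar change would have been of this form, with the
names invented for illustration:

ALTER TABLE clients ALTER COLUMN org_name SET DATA TYPE varchar(128);

Since PostgreSQL 9.2, widening a varchar limit is a catalog-only change, so
it is quick regardless of table size; changing integer to bigint, by
contrast, rewrites every row of the table.]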

Regards,

Rich
