From: Michael Lewis <mlewis(at)entrata(dot)com>
To: charles meng <xlyybz(at)gmail(dot)com>
Cc: PostgreSQL General <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: Alter the column data type of the large data volume table.
Date: 2020-12-03 17:10:42
Message-ID: CAHOFxGpqi_=T2JQf+eNef3A5XR-Z9FeSB-hpZkG5aOr0PPvi0g@mail.gmail.com
Lists: pgsql-general
On Wed, Dec 2, 2020 at 11:53 PM charles meng <xlyybz(at)gmail(dot)com> wrote:
> Hi all,
>
> I have a table with 1.6 billion records. The data type of the primary key
> column was incorrectly set to integer, and I need to change it to bigint.
> Are there any ideas for this?
>
> Solutions that have been tried:
> Adding a temporary column was too time-consuming, so I gave up on that.
> Using a temporary table: I found no good way to migrate the original
> table's data into the temporary table.
>
> Thanks in advance.
>
You can add a new column with no default value, so it is null by default;
that is very fast because it only touches the catalog, not the 1.6 billion
rows. Then gradually update rows in batches (on PG11+, perhaps a DO script
with a loop that commits after every X rows) to set the new column equal to
the primary key. Lastly, in a single transaction, update any remaining rows
where the bigint column is still null, make the new column the primary key,
and drop the old one. This keeps each transaction reasonably small, so it
doesn't hold up other processes. A sketch of the whole sequence is below.
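
A minimal sketch of that sequence, assuming a hypothetical table big_table
with an integer primary key column id (positive values), the default
constraint name big_table_pkey, and no foreign keys pointing at the old
column; adjust names and batch size to your schema:

-- Step 1: adding a nullable column with no default only touches the
-- catalog, so it is fast regardless of table size.
ALTER TABLE big_table ADD COLUMN id_new bigint;

-- Step 2: backfill in batches. On PG11+, a DO block run outside an
-- explicit transaction may COMMIT, keeping each transaction small.
DO $$
DECLARE
    batch_size integer := 100000;
    max_id     integer;
    lo         integer := 0;
BEGIN
    SELECT max(id) INTO max_id FROM big_table;
    WHILE lo <= max_id LOOP
        UPDATE big_table
           SET id_new = id
         WHERE id > lo AND id <= lo + batch_size
           AND id_new IS NULL;   -- guard makes the loop safe to re-run
        lo := lo + batch_size;
        COMMIT;
    END LOOP;
END
$$;

-- Step 3: one short transaction to catch rows inserted since the
-- backfill started, then swap the primary key to the new column.
BEGIN;
UPDATE big_table SET id_new = id WHERE id_new IS NULL;
ALTER TABLE big_table ALTER COLUMN id_new SET NOT NULL;
ALTER TABLE big_table DROP CONSTRAINT big_table_pkey;
ALTER TABLE big_table DROP COLUMN id;
ALTER TABLE big_table RENAME COLUMN id_new TO id;
ALTER TABLE big_table ADD PRIMARY KEY (id);
COMMIT;

Note that SET NOT NULL and ADD PRIMARY KEY each scan the full table, so
that last transaction is not instant at this row count. If that matters,
you can build a unique index beforehand with CREATE UNIQUE INDEX
CONCURRENTLY on id_new and attach it via ALTER TABLE ... ADD CONSTRAINT
... PRIMARY KEY USING INDEX to keep the exclusive-lock window shorter.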