From: Craig Ringer <ringerc(at)ringerc(dot)id(dot)au>
To: Rob Sargent <robjsargent(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: change column data type of a big table
Date: 2012-08-11 00:42:28
Message-ID: 5025AA74.7000905@ringerc.id.au
Lists: pgsql-general
On 08/10/2012 10:26 PM, Rob Sargent wrote:
> Seems like
> for i = 0 to 9
> select into table-with-bigint from table-with-int where <id mod 10 = i>
> delete from table-with-int where <id mod 10 = i>
>
> or any such fractionation scheme might do the trick
For that matter, you can ALTER TABLE ... ADD COLUMN the bigint column
without a default, then do batches of:
BEGIN;
UPDATE thetable SET bigintcol = intcol
WHERE id > lastbatchmax AND id <= thisbatchmax;
COMMIT;
VACUUM thetable;
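For concreteness, the ADD COLUMN step with the same placeholder names
would be something like:

ALTER TABLE thetable ADD COLUMN bigintcol bigint;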
Because adding a column without a DEFAULT or USING clause doesn't write
anything to the table heap, that'll progressively rewrite the table
in-place. You can have other read-only clients continuing to use the
table so long as their transactions are short. Even read/write clients
are OK, and you can add a trigger that sets the bigint column from the
int column whenever a row is inserted or updated for any other reason.
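A minimal sketch of such a trigger, using the same placeholder names
(the function and trigger names here are illustrative):

CREATE FUNCTION copy_intcol_to_bigintcol() RETURNS trigger AS $$
BEGIN
    -- keep the new bigint column in step with the old int column
    NEW.bigintcol := NEW.intcol;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER thetable_copy_intcol
    BEFORE INSERT OR UPDATE ON thetable
    FOR EACH ROW
    EXECUTE PROCEDURE copy_intcol_to_bigintcol();

Covering INSERT as well as UPDATE means rows created while the batches
are running can't reintroduce nulls in the bigint column.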
Eventually you'll find you no longer have any null entries in the bigint
column, so you can ALTER TABLE ... ALTER COLUMN ... SET NOT NULL it, drop
the int column, and rename the bigint one within a single transaction.
Active clients will only block briefly.
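That final swap might look like this (placeholder names again; remember
to drop the sync trigger at the same time):

BEGIN;
ALTER TABLE thetable ALTER COLUMN bigintcol SET NOT NULL;
DROP TRIGGER thetable_copy_intcol ON thetable;
ALTER TABLE thetable DROP COLUMN intcol;
-- take over the old column's name
ALTER TABLE thetable RENAME COLUMN bigintcol TO intcol;
COMMIT;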
--
Craig Ringer