From: Rob Sargent <robjsargent(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: change column data type of a big table
Date: 2012-08-10 14:26:54
Message-ID: 50251A2E.40008@gmail.com
Lists: pgsql-general
On 08/10/2012 12:05 AM, John R Pierce wrote:
> On 08/09/12 10:31 PM, Anibal David Acosta wrote:
>>
>> I have a very big table; in fact this table alone uses approx. 60% of
>> the disk space.
>>
>> The table is a standalone table (nothing inherits from it and it does
>> not inherit from anything else).
>>
>> I need to change an int column to bigint.
>>
>> But apparently this ALTER rewrites the table, and my problem is space:
>> there is not enough room on the disk.
>>
>> Is it possible to change the column's data type without recreating the
>> table?
>>
>>
>
> Every tuple of this table will have to be rewritten with the new type;
> you can't avoid that. Since none of the old tuples can be reclaimed
> before the ALTER TABLE completes, you'll need sufficient disk space for
> both the old and the new data.
>
> I see no way of avoiding the need for more disk space. If you have
> sufficient space on another drive, you could dump the table data-only,
> truncate it, alter the type while it's empty, then restore the dump.
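
For what it's worth, that dump/truncate/restore route might look roughly
like this (untested sketch; big_table, the column name id, and the dump
path are all placeholders):

    -- COPY ... TO/FROM a server-side file needs superuser;
    -- psql's \copy is the client-side equivalent.
    COPY big_table TO '/otherdrive/big_table.dat';
    TRUNCATE big_table;
    ALTER TABLE big_table ALTER COLUMN id TYPE bigint;  -- cheap while empty
    COPY big_table FROM '/otherdrive/big_table.dat';
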
Alternatively, seems like

    for i = 0 to 9
        insert into table-with-bigint (select * from table-with-int where id mod 10 = i)
        delete from table-with-int where id mod 10 = i

or any such fractionation scheme might do the trick.
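
In actual SQL one slice might look like this (untested; assumes the
original table is t_int with an integer column id, and that t_bigint is
an empty clone of its definition with id as bigint):

    BEGIN;
    INSERT INTO t_bigint SELECT * FROM t_int WHERE id % 10 = 0;
    DELETE FROM t_int WHERE id % 10 = 0;
    COMMIT;
    -- VACUUM can't run inside a transaction block, so it goes after COMMIT
    VACUUM t_int;  -- make the deleted slice's space reusable

Repeat for remainders 1 through 9.
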
And obviously more disk is inevitably required unless this table is
frozen, which doesn't seem likely.