Re: Column type modification in big tables

From: sud <suds1434(at)gmail(dot)com>
To: Lok P <loknath(dot)73(at)gmail(dot)com>
Cc: Greg Sabino Mullane <htamfids(at)gmail(dot)com>, pgsql-general <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: Column type modification in big tables
Date: 2024-08-10 12:17:18
Message-ID: CAD=mzVVS6HbV25M7EA+TJYk22G=GJvQK5Gbe9eeifYSmadYtNA@mail.gmail.com
Lists: pgsql-general

On Sat, Aug 10, 2024 at 12:52 AM Lok P <loknath(dot)73(at)gmail(dot)com> wrote:

>
> On Fri, Aug 9, 2024 at 9:19 PM Greg Sabino Mullane <htamfids(at)gmail(dot)com>
> wrote:
>
>> On Fri, Aug 9, 2024 at 6:39 AM Lok P <loknath(dot)73(at)gmail(dot)com> wrote:
>>
>>> Thank you so much. Will definitely try to evaluate this approach. The
>>> only concern I have is that this data moves downstream with exactly the
>>> same data type and length, so will it cause the downstream code to break
>>> while using this column in join or filter criteria? Also, I believe the
>>> optimizer won't be able to utilize this information while preparing the
>>> execution plan.
>>>
>>
>> Yes, this is not as ideal as rewriting the table, but you asked for
>> other approaches :) As to the impact on your downstream stuff, I think
>> you have to try and see. Not clear what you mean by the optimizer; it's
>> not going to really care about numeric(10) versus numeric(8) or
>> varchar(20) vs varchar(2). It's possible the varchar -> numeric could
>> cause issues, but without real-world queries and data we cannot say.
>>
>>
>>> Another thing, correct me if I'm wrong: my understanding is that if we
>>> want to run the "validate constraint" command after running this "check
>>> constraint with not valid" command, it will do a full table scan across
>>> all the partitions, but it's still beneficial as compared to updating
>>> the column values for each row.
>>>
>>
>> Yes, it needs to scan the entire table, but it takes a lightweight lock,
>> won't block concurrent access, will not need to detoast, and makes no
>> table or index updates. Versus an entire table rewrite, which will do
>> heavy locking, take up tons of I/O, update all the indexes, and generate
>> quite a lot of WAL.
>>
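A minimal sketch of the NOT VALID / VALIDATE approach Greg describes (table, column, and constraint names here are placeholders, not from the thread):

```sql
-- Add the constraint without scanning existing rows; this only needs a
-- brief lock while the catalog entry is created.
ALTER TABLE old_part_table
    ADD CONSTRAINT mycol_len_chk CHECK (length(mycol::text) <= 3) NOT VALID;

-- Later, validate it. This does a full scan of every partition, but only
-- under a SHARE UPDATE EXCLUSIVE lock, so normal reads and writes continue.
ALTER TABLE old_part_table VALIDATE CONSTRAINT mycol_len_chk;
```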
>>
> Thank you so much Greg.
>
> Considering the option, assuming we are able to get a large downtime
> window to get this activity done:
>
> Some teammates suggested altering the column with a "USING" clause. I am
> not really able to understand the difference; also, when I tested on a
> simple table, the "USING" clause seemed to take more time as compared to
> a normal ALTER. But again, I don't see any way to see the progress and
> estimated completion time. Can you share your thoughts on this?
>
> ALTER TABLE foobar ALTER COLUMN mycol TYPE NUMERIC(3) USING
> mycol::NUMERIC(3);
> VS
> ALTER TABLE foobar ALTER COLUMN mycol TYPE NUMERIC(3) ;
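For what it's worth, when an assignment cast already exists (as it does between numeric types), the two forms should behave the same; USING only becomes necessary when no such cast exists. A hedged sketch (the `textcol` column is a hypothetical example, not from the thread):

```sql
-- numeric -> numeric: an assignment cast exists, so these are equivalent
ALTER TABLE foobar ALTER COLUMN mycol TYPE numeric(3);
ALTER TABLE foobar ALTER COLUMN mycol TYPE numeric(3) USING mycol::numeric(3);

-- varchar -> numeric: no assignment cast exists, so USING is required
ALTER TABLE foobar ALTER COLUMN textcol TYPE numeric(10,2)
    USING textcol::numeric(10,2);
```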
>
> *****
> Another thing that comes to mind: should we just create a new partition
> table (say new_part_table) from scratch from the DDL of the existing
> table (say old_part_table) and then load the data into it using "insert
> into new_part_table.. select..from old_part_table", then create indexes
> and constraints etc., something as below?
>
> Will this approach be faster/better as compared to the simple "alter
> table alter column" approach above, considering we will have 4-6 hours
> of downtime for altering three different columns on this ~5TB table?
>
>
> *-- Steps*
> Create table exactly same as existing partition table but with the
> modified column types/lengths.
>
> drop indexes; (except perhaps the PK and FK indexes)
> drop constraints;
>
> insert into new_part_table (...) select (...) from old_part_table;
>
> create indexes concurrently ;
> create constraints; (but this table is also a child table of another
> partitioned table, so creating the foreign key may be resource-consuming
> here too)
>
> drop the old_part_table;
> rename the new_part_table to old_part_table;
> rename all the partitions;
>
> VACUUM old_part_table ;
> ANALYZE old_part_table ;
>
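The steps above could be sketched roughly as follows. All names are placeholders, the partitioning scheme and index definitions would need to match the real table, and matching partitions must be created before the load:

```sql
-- 1. New table with the corrected column types (structure only)
CREATE TABLE new_part_table (LIKE old_part_table INCLUDING DEFAULTS)
    PARTITION BY RANGE (part_col);     -- mirror the existing scheme
ALTER TABLE new_part_table ALTER COLUMN mycol TYPE numeric(3);
-- ...create partitions matching old_part_table here...

-- 2. Bulk load during the downtime window
INSERT INTO new_part_table SELECT * FROM old_part_table;

-- 3. Rebuild indexes. During downtime a plain CREATE INDEX is fine;
--    note that CREATE INDEX CONCURRENTLY cannot be used directly on a
--    partitioned parent anyway.
CREATE INDEX idx_mycol ON new_part_table (mycol);

-- 4. Swap names and refresh statistics
DROP TABLE old_part_table;
ALTER TABLE new_part_table RENAME TO old_part_table;
VACUUM ANALYZE old_part_table;
```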

My 2 cents:
If you have enough time, then from a simplicity point of view your single
ALTER command may look good, but how are you going to see how much progress
it has made so far and how much time it's going to take to finish? You have
~6 hours of downtime, but if it fails at the 5th hour you will be in a bad
position.
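One rough way to watch the INSERT .. SELECT variant (there is no built-in progress view for ALTER TABLE or for a plain INSERT) is to poll the size of the target table from a second session. A sketch, assuming the placeholder names above and PostgreSQL 12+ for pg_partition_tree():

```sql
-- Run from another session while the load is in flight. Sizes are summed
-- across all partitions; this is only an approximation (TOAST and fill
-- factor can skew the ratio against the source table's size).
SELECT pg_size_pretty(sum(pg_total_relation_size(relid))) AS loaded_so_far
FROM   pg_partition_tree('new_part_table');
```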
