Re: Alter table fast

From: veem v <veema0000(at)gmail(dot)com>
To: Ron Johnson <ronljohnsonjr(at)gmail(dot)com>
Cc: pgsql-general <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: Alter table fast
Date: 2025-01-09 19:17:14
Message-ID: CAB+=1TVSEZB6XSxzAYSp1O91i3JKmMRgPjckLT0Dokrfd_PxWQ@mail.gmail.com
Lists: pgsql-general

On Thu, 9 Jan 2025 at 21:57, Ron Johnson <ronljohnsonjr(at)gmail(dot)com> wrote:

> On Thu, Jan 9, 2025 at 11:25 AM veem v <veema0000(at)gmail(dot)com> wrote:
>
>> Hello,
>> It's Postgres version 16.1. We want to convert an existing column's data
>> type from integer to numeric, and it's taking a long time. The table is
>> ~50GB, has ~150 million rows, and is not partitioned. We tried running the
>> direct ALTER and it has been running for hours, so we wanted to understand
>> from the experts what the best way to achieve this is.
>>
>>
> Out of curiosity, why NUMERIC(15,0) instead of BIGINT?
>
>
>
It's to align the database column types with the data model, and the change is
happening across all the upstream and downstream systems.
I was wondering whether this can be made faster with the single ALTER
statement "ALTER TABLE <table_name> ALTER COLUMN <column_name> TYPE
numeric(15,0) USING <column_name>::numeric(15,0);", or through the
UPDATE-then-rename-column strategy. Additionally, can it be further improved
with any session-level parameters such as "max_parallel_workers_per_gather",
"max_parallel_workers", "maintenance_work_mem" or "work_mem"?
