Re: Alter table fast

From: Marco Torres <mtors25(at)gmail(dot)com>
To: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: Alter table fast
Date: 2025-01-12 21:36:10
Message-ID: CAG2LZV58fabgd-nGWe=XkkyrsEb9+zo3rUi_oybuhDh_QV5JBQ@mail.gmail.com
Lists: pgsql-general

This is the right approach, Peter J. Holzer. From a well-seasoned DBA's
perspective: "ALTER TABLE working_table ADD COLUMN B INTEGER;
UPDATE working_table SET B = A;"

Bear in mind the indexes and any existing references to and from other
tables, and act accordingly: define the new ones and drop the old.
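
A minimal sketch of that approach, keeping the working_table / A / B names
from the example above and assuming the target type from the thread is
NUMERIC(15,0); the index name is hypothetical, and batching and constraint
handling need to be adapted to the real schema:

  -- Adding a nullable column without a default is a metadata-only change.
  ALTER TABLE working_table ADD COLUMN b NUMERIC(15,0);

  -- Backfill in the background; on a large table, batch by primary key
  -- range to keep individual transactions short.
  UPDATE working_table SET b = a;

  -- Rebuild any index that covered the old column without blocking writes.
  CREATE INDEX CONCURRENTLY working_table_b_idx ON working_table (b);

  -- Foreign keys pointing at the old column would need to be recreated
  -- against the new one as well (not shown).

  -- Switch over: drop the old column and let the new one take its name.
  BEGIN;
  ALTER TABLE working_table DROP COLUMN a;
  ALTER TABLE working_table RENAME COLUMN b TO a;
  COMMIT;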

Good luck.

On Sun, Jan 12, 2025, 2:20 PM Peter J. Holzer <hjp-pgsql(at)hjp(dot)at> wrote:

> On 2025-01-09 20:52:27 +0100, shammat(at)gmx(dot)net wrote:
> > On 09.01.25 at 20:17, veem v wrote:
> > > > Out of curiosity, why NUMERIC(15,0) instead of BIGINT?
> > >
> > > It's for aligning the database column types to the data model, and
> > > it's happening across all the upstream and downstream systems. I was
> > > thinking if this can be made faster with the single line alter
> > > statement "Alter table <table_name> alter column <column_name> type
> > > numeric(15,0) USING <column_name>::NUMERIC(15,0);"
> >
> > Hmm, I would rather change numeric(15,0) to bigint if I had to "align"
> > types across systems.
>
> I'm also wondering what "the data model" is.
>
> If I have numeric(15,0) in an abstract data model, that means that I
> expect values larger than 99,999,999,999,999 but at most
> 999,999,999,999,999. That seems to be oddly specific and also somewhat
> at odds with reality when until now there apparently haven't been any
> values larger than 2,147,483,647. What kind of real world value could
> suddenly jump by more than 5 orders of magnitude but certainly not by 7?
>
> A bigint is much less precise (more than 2,147,483,647 but not more
> than 9,223,372,036,854,775,807) and therefore more suitable for values
> where you don't really know the range.
>
> However, for the problem at hand, I doubt it makes any difference.
> Surely converting a few million values takes much less time than
> rewriting a 50 GB table and all its indexes.
>
> So there isn't really a faster way to do what Veem wants. There may,
> however, be a less disruptive way: he could create a new column with the
> new values (which takes at least as long but can be done in the
> background) and then switch it over and drop the old column.
>
> hp
>
> --
>    _  | Peter J. Holzer    | Story must make more sense than reality.
> |_|_) |                    |
> | |   | hjp(at)hjp(dot)at  |    -- Charles Stross, "Creative writing
> __/   | http://www.hjp.at/ |       challenge!"
>
