Re: Change column type from int to bigint - quickest way

From: Merlin Moncure <mmoncure(at)gmail(dot)com>
To: Andreas Brandl <ml(at)3(dot)141592654(dot)de>
Cc: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: Change column type from int to bigint - quickest way
Date: 2016-11-12 01:17:59
Message-ID: CAHyXU0z8+VSg11kKOfDBR-LE3L-u6-bX_w1b1CRNbh7aG26hiA@mail.gmail.com
Lists: pgsql-general

On Friday, November 11, 2016, Andreas Brandl <ml(at)3(dot)141592654(dot)de> wrote:

> Hi,
>
> we have a pretty big table with an integer-type primary key. I'm looking
> for the quickest way to change the column type to bigint to avoid hitting
> the integer limit. We're trying to avoid prolonged lock situations and full
> table rewrites.
>
> I know I can hack this with an UPDATE on pg_attribute:
>
> -- change id type to bigint
> update pg_attribute set atttypid=20 where attrelid=264782 and attname =
> 'id';
>
> After that I'd need to reflect the change on dependent objects like views
> as well.
>
> Is this safe to do? Are there any unwanted consequences to this?
>
> This is still on 9.1 unfortunately - upgrade is going to follow soon after
> this.
>
> Thanks!
> Andreas
>

Hm, just thinking out loud:

How about adding a new bigint column, without a default, that is kept in sync via a trigger? Then you can update the table row by row, in batches spread over many transactions, to initialize the new id. Once it is completely populated, you can do the swap with some carefully written and tested DDL that exchanges the column name and any dependent objects such as RI triggers.
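A rough sketch of what that could look like (the names mytable, id and id_big are only placeholders, and this is untested off-the-cuff SQL, so try it on a copy first):

    -- 1. add the shadow column; no default, so no table rewrite
    ALTER TABLE mytable ADD COLUMN id_big bigint;

    -- 2. keep newly inserted and updated rows in sync
    CREATE FUNCTION mytable_sync_id_big() RETURNS trigger AS $$
    BEGIN
        NEW.id_big := NEW.id;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER mytable_sync_id_big
        BEFORE INSERT OR UPDATE ON mytable
        FOR EACH ROW EXECUTE PROCEDURE mytable_sync_id_big();

    -- 3. backfill existing rows in small batches, each in its own transaction
    UPDATE mytable SET id_big = id
    WHERE id BETWEEN 1 AND 10000 AND id_big IS NULL;
    -- ...repeat with the next id range until the whole table is covered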

The exchange step itself ought to be quick. You may have to temporarily disable RI checks to keep things running smoothly.
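A sketch of the exchange, under the same assumed names (and again untested): building the unique index concurrently ahead of time lets 9.1 attach the new primary key without rebuilding it inside the transaction, though the SET NOT NULL validation still scans the table under lock, and any foreign keys referencing the old column would have to be dropped and recreated against the new one.

    -- build the replacement unique index ahead of time, outside the transaction
    CREATE UNIQUE INDEX CONCURRENTLY mytable_id_big_key ON mytable (id_big);

    BEGIN;
    ALTER TABLE mytable ALTER COLUMN id_big SET NOT NULL;  -- scans the table under lock
    DROP TRIGGER mytable_sync_id_big ON mytable;
    DROP FUNCTION mytable_sync_id_big();
    ALTER TABLE mytable DROP CONSTRAINT mytable_pkey;       -- old pk and its index go away
    -- note: if id was a serial, detach its sequence first so it survives
    -- the DROP COLUMN, then reattach it as the default on the new column
    ALTER TABLE mytable DROP COLUMN id;
    ALTER TABLE mytable RENAME COLUMN id_big TO id;
    -- 9.1 can reuse the pre-built index for the new primary key
    ALTER TABLE mytable ADD CONSTRAINT mytable_pkey
        PRIMARY KEY USING INDEX mytable_id_big_key;
    COMMIT;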

merlin
