From: Marcelo Fernandes <marcefern7(at)gmail(dot)com>
To: pgsql-docs(at)lists(dot)postgresql(dot)org
Subject: Tip box on Adding a Column
Date: 2024-11-01 09:05:36
Message-ID: CAM2F1VNAP2bKEtxymaX=j+aV3hTfcZjH7p2jyCDGc_329rUiPQ@mail.gmail.com
Lists: pgsql-docs
Hi folks,
We have this Tip box under the "Adding a Column" header here:
- https://www.postgresql.org/docs/current/ddl-alter.html#DDL-ALTER-ADDING-A-COLUMN
That says:
> From PostgreSQL 11, adding a column with a constant default value no longer
> means that each row of the table needs to be updated when the ALTER TABLE
> statement is executed. Instead, the default value will be returned the next
> time the row is accessed, and applied when the table is rewritten, making the
> ALTER TABLE very fast even on large tables.
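For concreteness, this is the kind of statement I understand the tip to be describing (the table and column names below are made up for illustration):

    -- Constant default: per the tip, from PostgreSQL 11 this should not
    -- rewrite the table; existing rows pick up the default lazily when read.
    ALTER TABLE orders ADD COLUMN source text DEFAULT 'legacy';

    -- A volatile default, by contrast, still forces a full-table rewrite.
    ALTER TABLE orders ADD COLUMN imported_at timestamptz DEFAULT clock_timestamp();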
I'm just seeking clarification: does this advice hold **even for** new columns
declared with NOT NULL?
Historically, I've had to add new columns to big existing tables as nullable to
avoid downtime, but perhaps that changes when a DEFAULT is provided?
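For context, the pattern I've relied on in the past looks roughly like this (again, the names are hypothetical):

    -- Add the column as nullable so the ALTER is quick, then backfill and
    -- tighten the constraint in separate steps.
    ALTER TABLE orders ADD COLUMN status text;
    UPDATE orders SET status = 'pending' WHERE status IS NULL;  -- batched in practice
    ALTER TABLE orders ALTER COLUMN status SET NOT NULL;        -- scans the table to validate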
I have used perf to profile the call chain for adding a NOT NULL column with a
default versus an ordinary nullable column with a default, and the two are
fairly similar.
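Concretely, the two statements I profiled were along these lines (names made up):

    -- NOT NULL with a constant default
    ALTER TABLE orders ADD COLUMN active boolean NOT NULL DEFAULT true;
    -- Nullable with the same constant default
    ALTER TABLE orders ADD COLUMN active2 boolean DEFAULT true;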
However, I see these functions being called in both cases:
- ATRewriteTables
- find_composite_type_dependencies
- systable_beginscan
- index_rescan
- btrescan
Those names raised an eyebrow... I don't have a deep understanding of the
internals here, so it would be great if someone could clarify this for me.
Thanks,
Marcelo.