| From: | Ron Johnson <ronljohnsonjr(at)gmail(dot)com> |
|---|---|
| To: | yudhi s <learnerdatabase99(at)gmail(dot)com> |
| Cc: | pgsql-general <pgsql-general(at)lists(dot)postgresql(dot)org> |
| Subject: | Re: How to add columns faster |
| Date: | 2024-03-03 19:16:42 |
| Message-ID: | CANzqJaCWOmh9-wiSUTt756Zuc=r+nZG6PNcah0b8JLgDTLvCWg@mail.gmail.com |
| Lists: | pgsql-general |
On Sun, Mar 3, 2024 at 2:06 PM yudhi s <learnerdatabase99(at)gmail(dot)com> wrote:
> Hello,
> We have an application in which some of the big transaction tables will
> have approximately 200 columns once all the business use cases are being
> served. Currently they have ~100 columns to serve the initial business
> use cases. As the team is following an agile approach, the remaining
> attributes will be added gradually with each future release. The concern
> is that these transaction tables will hold billions of rows (because of
> historical data migration from the start) and will be range partitioned;
> each partition can be as big as ~100 GB, and the full table size can be
> multiple terabytes.
>
> So our concern is: since column addition using the traditional ALTER
> TABLE command in Postgres appears to require a full table rewrite, it is
> going to take a lot of time and resources with each such release for
> these big transaction tables. What is the way to handle such scenarios
> and perform these column additions quickly, with no or minimal downtime?
>
Normalizing the database design completely eliminates the need to rewrite a
table, since new attributes are added as new rows in a separate table rather
than as new columns.
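As a minimal sketch of that idea (all table and column names here are hypothetical, not from the thread): the wide transaction table is split into a narrow core table plus an attribute side table, so introducing a new attribute later is an INSERT rather than an ALTER TABLE:

```sql
-- Hypothetical schema: a narrow core table plus an attribute table.
CREATE TABLE txn (
    txn_id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    txn_date date NOT NULL
);

-- Each attribute of a transaction is a row here, not a column on txn.
CREATE TABLE txn_attr (
    txn_id     bigint NOT NULL REFERENCES txn (txn_id),
    attr_name  text   NOT NULL,
    attr_value text   NOT NULL,
    PRIMARY KEY (txn_id, attr_name)
);

-- A brand-new attribute ('risk_score', say) needs no DDL at all:
INSERT INTO txn_attr (txn_id, attr_name, attr_value)
VALUES (1, 'risk_score', '0.87');
```

The trade-off is that queries must join (or pivot) `txn_attr` to reassemble a row, and all values share one storage type, which is why this pattern suits sparse or late-arriving attributes better than core, heavily queried columns.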
| | From | Date | Subject |
|---|---|---|---|
| Next Message | yudhi s | 2024-03-03 19:40:01 | Re: How to add columns faster |
| Previous Message | Christophe Pettus | 2024-03-03 19:13:11 | Re: How to add columns faster |