Re: How to do faster DML

From: Greg Sabino Mullane <htamfids(at)gmail(dot)com>
To: veem v <veema0000(at)gmail(dot)com>
Cc: pgsql-general(at)lists(dot)postgresql(dot)org, "Peter J(dot) Holzer" <hjp-pgsql(at)hjp(dot)at>, Laurenz Albe <laurenz(dot)albe(at)cybertec(dot)at>
Subject: Re: How to do faster DML
Date: 2024-02-12 16:28:41
Message-ID: CAKAnmmLcTS7zpeZ3m7mkJxGsu+f+Tv9bp5KV3T52Q+c8DMTm+A@mail.gmail.com
Lists: pgsql-general

On Mon, Feb 12, 2024 at 1:50 AM veem v <veema0000(at)gmail(dot)com> wrote:

> So we were thinking that adding many columns to a table should be fine in
> Postgres (we have a use case in which the total number of columns may
> go to ~500+). But accessing columns toward the end of a row is going to
> take more time than accessing a column at the start of the row. As you
> mentioned, accessing the 100th column may take 4 to 5 times longer than
> accessing the first column. So, is it advisable here to take a similar
> approach of breaking the table into two if the total number of columns
> reaches a certain threshold for a table?
>
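
(For anyone wanting to reproduce the effect being discussed, here is a rough psql sketch; the table and column names are made up, and it assumes a scratch database you can create objects in. It builds a 100-column table with generated DDL, then times aggregating the first versus the last column, which forces the tuple-deforming walk over the preceding 99 columns.)

```sql
-- Generate and execute: CREATE TABLE wide AS SELECT g AS c1, ..., g AS c100 ...
SELECT format(
         'CREATE TABLE wide AS SELECT %s FROM generate_series(1, 100000) g',
         string_agg('g AS c' || i, ', '))
FROM generate_series(1, 100) AS i
\gexec

\timing on
SELECT sum(c1)   FROM wide;  -- first column: found immediately
SELECT sum(c100) FROM wide;  -- 100th column: must step over 99 prior columns per row
```

The absolute numbers will vary by hardware and row width, but the relative gap between the two queries is the per-column access cost in question.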

I'm not sure what exactly Peter was testing to get those 4-5x figures, but
I presume that is column access time, which would not translate into a 4-5x
effect on your total query time. As for breaking the table in two, I would
not recommend that, as it adds complexity and introduces other problems. On
the other hand, 500 columns is pretty extreme, so perhaps the columns that
are rarely accessed or searched for could go into another table. It is
really hard to say without knowing more about your data access patterns.
Hopefully, we never see a "SELECT *" on a table like that! :)
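
(If you do go the "rarely accessed columns elsewhere" route, one common shape is a hot/cold vertical split on a shared key. A sketch, with hypothetical table and column names:)

```sql
-- Frequently used columns stay in the main table.
CREATE TABLE orders_hot (
    order_id   bigint PRIMARY KEY,
    status     text,
    created_at timestamptz
);

-- Rarely touched columns move to a side table sharing the same key.
CREATE TABLE orders_cold (
    order_id   bigint PRIMARY KEY REFERENCES orders_hot (order_id),
    audit_blob jsonb,
    legacy_ref text
);

-- The occasional query that needs everything joins the two;
-- most queries touch only the narrow orders_hot table.
SELECT h.order_id, h.status, c.audit_blob
FROM orders_hot h
JOIN orders_cold c USING (order_id);
```

The win is that the hot table's rows stay short, so sequential scans and tuple deforming on the common paths get cheaper; the cost is an extra join (and extra insert) for the cold data.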

Cheers,
Greg

