Re: Design strategy for table with many attributes

From: David Rowley <dgrowleyml(at)gmail(dot)com>
To: Lok P <loknath(dot)73(at)gmail(dot)com>
Cc: Guyren Howe <guyren(at)gmail(dot)com>, pgsql-general <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: Design strategy for table with many attributes
Date: 2024-07-05 08:13:54
Message-ID: CAApHDvp6FJW0MnchEfis=2ANUTNu4CHVQOcs95=NwTg4SQrxtQ@mail.gmail.com
Lists: pgsql-general

On Fri, 5 Jul 2024 at 19:53, Lok P <loknath(dot)73(at)gmail(dot)com> wrote:
> As David suggested, it breaks if a row exceeds the 8k limit, i.e. a single page size. Will that still hold true if we have a column with JSON in it?

You wouldn't be at risk of the same tuple-length problem if you
reduced the column count and stored the additional details in JSON.
Each varlena column is stored either inline in the tuple or toasted
and stored out of line. Each out-of-line value needs an 18-byte
pointer to the toasted data, and that pointer still counts towards
the tuple length.
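A rough back-of-envelope sketch of the point above (purely illustrative, not PostgreSQL internals; the constants are the documented default 8 kB block size, the 18-byte TOAST pointer, and an approximate 23-byte heap tuple header): even when every value is toasted out of line, the per-column pointers alone can push a very wide tuple past the page limit.

```python
PAGE_SIZE = 8192     # default PostgreSQL block size
TOAST_POINTER = 18   # bytes left in the tuple per out-of-line value
TUPLE_HEADER = 23    # approximate heap tuple header size

def min_tuple_length(n_toasted_columns: int) -> int:
    """Lower bound on tuple length if every column is toasted."""
    return TUPLE_HEADER + n_toasted_columns * TOAST_POINTER

print(min_tuple_length(450))               # 8123 -- already near the page limit
print(min_tuple_length(500) > PAGE_SIZE)   # True -- pointers alone overflow the page
```

So collapsing many sparse attributes into one JSON column removes hundreds of per-column pointers from the tuple, which is why the tuple-length risk goes away.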

This isn't me advocating for JSON. I'm just explaining the
limitations. I think I'd only advocate for JSON if the properties you
need to store vary wildly between tuples. There's a large overhead to
storing JSON labels, which you'd pay for in every tuple. That sounds
like it would scale terribly with the data volumes you've suggested
you'll be processing.
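To make the label overhead concrete, here's a small sketch (the key names are hypothetical): a relational column name is stored once in the catalog, but a JSON key is repeated in every row, so the labels can dwarf the actual values.

```python
import json

# Hypothetical row with two attributes stored as JSON.
row = {"customer_reference_number": 12345,
       "shipment_tracking_status": "DELIVERED"}

# Compact serialisation, as jsonb-style storage would avoid whitespace.
payload = json.dumps(row, separators=(",", ":"))
values_only = json.dumps(list(row.values()), separators=(",", ":"))

# Bytes spent on labels rather than data, paid again in every row.
label_overhead = len(payload) - len(values_only)
print(len(payload), len(values_only), label_overhead)  # 74 19 55
```

Here the labels cost roughly three times as much as the values themselves, and that cost is multiplied by the row count.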

David
