Storing large JSON objects in JSONB

From: Wells Oliver <wells(dot)oliver(at)gmail(dot)com>
To: pgsql-admin <pgsql-admin(at)postgresql(dot)org>
Subject: Storing large JSON objects in JSONB
Date: 2022-07-09 19:37:44
Message-ID: CAOC+FBWRavHQWtQRJ_suSRjtDR9PrnDeRm9CWojr2ogYgDA7Ww@mail.gmail.com
Lists: pgsql-admin

We have a couple of tables where we store very large JSON objects in a
JSONB column. We don't use these in, err, real-time situations, intensive
queries, etc., so I'm not especially concerned about their performance.
It's more just to store vendor data that gets parsed as we go into
smaller, more usable tables, so they serve as the historical record, and
occasionally we re-process them.
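
For concreteness, these tables are roughly shaped like the sketch below: a
handful of metadata columns plus one JSONB column holding the full vendor
document. (Names here are made up for illustration, not the actual schema.)

    -- Illustrative only: table and column names are hypothetical.
    CREATE TABLE vendor_payload (
        payload_id  bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
        vendor      text        NOT NULL,
        received_at timestamptz NOT NULL DEFAULT now(),
        payload     jsonb       NOT NULL  -- the large vendor JSON document
    );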

I'm wondering if there's some smarter solution with Postgres. As I
understand it, these values usually end up in TOAST tables, so the actual
table is quite small and there's an associated TOAST relation that grows
and grows (the one in question is currently 500GB).
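
(For reference, a query along these lines, with the real table name swapped
in for the hypothetical 'vendor_payload', shows the heap vs. TOAST split I'm
describing; pg_class.reltoastrelid points at the associated TOAST relation.)

    -- Heap size vs. the TOAST relation's size, plus the grand total.
    -- 'vendor_payload' is a stand-in for the real table name.
    SELECT c.relname,
           pg_size_pretty(pg_relation_size(c.oid))           AS heap_size,
           pg_size_pretty(pg_relation_size(c.reltoastrelid)) AS toast_size,
           pg_size_pretty(pg_total_relation_size(c.oid))     AS total_size
    FROM   pg_class c
    WHERE  c.relname = 'vendor_payload';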

I don't... really... care, I guess, that the TOAST tables are that large,
but I thought I'd do the due diligence here and ask if there's some
different solution, or if this is fine, or if it's some kind of ticking
time bomb I'm understanding poorly.

We like having these in the database because we have SQL scripts to parse
them, so saving them as files in S3 and rewriting the ETL in Python or
whatever kind of sounds awful.
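
The "parse them" step is basically flattening the stored document into
those narrower tables with plain SQL, along the lines of the sketch below
(table, column, and JSON path names are invented for illustration):

    -- Illustrative only: vendor_item and the 'items' path are hypothetical.
    INSERT INTO vendor_item (payload_id, item_id, name, price)
    SELECT p.payload_id,
           (item->>'id')::int,
           item->>'name',
           (item->>'price')::numeric
    FROM   vendor_payload p,
           jsonb_array_elements(p.payload->'items') AS item;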

Anyway, thanks for any insight into what I might be missing on this front.

--
Wells Oliver
wells(dot)oliver(at)gmail(dot)com
