Re: Avg/max size of these JSON docs in Postgres

From: Tomas Vondra <tomas(dot)vondra(at)enterprisedb(dot)com>
To: Michael Lewis <mlewis(at)entrata(dot)com>, Simon Riggs <simon(dot)riggs(at)enterprisedb(dot)com>
Cc: Ram Pratap Maurya <ram(dot)maurya(at)lavainternational(dot)in>, "pgsql-general(at)lists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org>, Manu Saxena <manu(dot)saxena(at)lavainternational(dot)in>
Subject: Re: Avg/max size of these JSON docs in Postgres
Date: 2021-10-12 22:07:11
Message-ID: 0f5ff36a-7fdb-41d2-bf0f-0219eb0a56d9@enterprisedb.com
Lists: pgsql-general

On 10/12/21 21:21, Michael Lewis wrote:
> On Tue, Oct 12, 2021 at 12:51 PM Simon Riggs
> <simon(dot)riggs(at)enterprisedb(dot)com> wrote:
>
> On Tue, 12 Oct 2021 at 18:53, Michael Lewis
> <mlewis(at)entrata(dot)com> wrote:
> >
> > On Tue, Oct 12, 2021 at 1:43 AM Simon Riggs
> > <simon(dot)riggs(at)enterprisedb(dot)com> wrote:
> >>
> >> On Tue, 12 Oct 2021 at 08:14, Ram Pratap Maurya
> >> <ram(dot)maurya(at)lavainternational(dot)in> wrote:
> >>
> >> > Confirm what is Avg/max size of these JSON docs in Postgres.
> >>
> >> JSON and JSONB datatypes can both be max 1GB in size.
> >
> > That is per row.
>
> No, that is per column.
>
>
> Yes, sorry. My attempt at clarification only muddled things. My
> understanding is that each column within each row can be up to 1GB in size.

But you're kinda right, actually. It very much is per-tuple, because in
various places we form tuples with all the data inline. Consider for
example this:

create table t (a text, b text);
alter table t alter column a set storage extended;
alter table t alter column b set storage extended;

insert into t select repeat(md5('a'), 512*1024*1024/32),
repeat(md5('b'), 512*1024*1024/32);
ERROR: invalid memory alloc request size 1073741880

Clearly, each value is only 512MB (uncompressed). Yet it fails, simply
because tts_virtual_copy_heap_tuple calls heap_form_tuple, which has to
allocate space for all the values plus the tuple header in one go, and
that single allocation exceeds the 1GB palloc limit.
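The exact figure in the error message can be reconstructed with a bit of
arithmetic. A minimal sketch, assuming a 64-bit build where both
sizeof(HeapTupleData) and the heap tuple header round up to 24 bytes
(those struct sizes are an assumption of this sketch, not something
stated above):

```python
# Back-of-the-envelope check of the "invalid memory alloc request size"
# figure. The 24-byte sizes below are assumed for a typical 64-bit build.
MB = 1024 * 1024
value_size = 512 * MB        # each repeated-md5 value, uncompressed
varlena_header = 4           # 4-byte length word per varlena datum
heap_tuple_data = 24         # assumed MAXALIGN(sizeof(HeapTupleData))
tuple_header = 24            # assumed tuple header size with no null bitmap

alloc = heap_tuple_data + tuple_header + 2 * (varlena_header + value_size)
print(alloc)  # 1073741880, matching the error above
```

So the two 512MB values fit under 1GB on their own; it's the few dozen
bytes of headers on top that push the single allocation over the limit.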

But this succeeds, because the first value is 64 bytes shorter (two
fewer 32-byte md5 repetitions), leaving enough space for the tuple
header etc.

insert into t select repeat(md5('a'), 512*1024*1024/32 - 2),
repeat(md5('b'), 512*1024*1024/32);

And you can even select the data:

select * from t;

You can even do this:

update t set a = repeat(md5('a'), 512*1024*1024/32);

which works, so now you have a row with two 512MB values. But then
you'll face this:

select * from t;
ERROR: out of memory
DETAIL: Cannot enlarge string buffer containing 536870922 bytes by
536870912 more bytes.

because printtup() builds the whole row as one huge string with all the
data (and if the columns were compressed, the values would be
decompressed here, because the data goes to the client).
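The numbers in that DETAIL line add up the same way: the string buffer
is palloc-based, so it cannot grow past the 1GB-minus-one-byte
MaxAllocSize, and the first 512MB value (plus a few bytes of message
framing, read off the error message rather than derived here) is already
in the buffer when the second value arrives:

```python
# Check the DETAIL: a buffer holding 536870922 bytes cannot be enlarged
# by another 536870912 bytes, because palloc caps requests at 1GB - 1.
MAX_ALLOC_SIZE = 0x3FFFFFFF       # 1073741823, the palloc ceiling

value_size = 512 * 1024 * 1024    # one uncompressed 512MB value
in_buffer = 536870922             # first value + framing, per the DETAIL

print(in_buffer + value_size > MAX_ALLOC_SIZE)  # True: enlarge fails
```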

So yeah, there's an explicit 1GB limit per value, but having rows close
to the 1GB limit is going to cause all sorts of unpredictable and rather
painful issues :-(

regards

--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
