From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>
Cc: Josh Berkus <josh(at)agliodbs(dot)com>, Nicolas Paris <niparisco(at)gmail(dot)com>, "David G(dot) Johnston" <david(dot)g(dot)johnston(at)gmail(dot)com>, pgsql-performance <pgsql-performance(at)postgresql(dot)org>
Subject: Re: array size exceeds the maximum allowed (1073741823) when building a json
Date: 2016-06-08 06:04:19
Message-ID: 717.1465365859@sss.pgh.pa.us
Lists: pgsql-performance
Michael Paquier <michael(dot)paquier(at)gmail(dot)com> writes:
> On Tue, Jun 7, 2016 at 10:03 PM, Josh Berkus <josh(at)agliodbs(dot)com> wrote:
>> On 06/07/2016 08:42 AM, Nicolas Paris wrote:
>>> Will this 1GB restriction be raised in the near future?
>> Not planned, no. Thing is, that's the limit for a field in general, not
>> just JSON; changing it would be a fairly large patch. It's desirable,
>> but AFAIK nobody is working on it.
> And there are other things to consider on top of that, like the
> maximum allocation size for palloc, the maximum query string size,
> COPY, etc. This is no small project, and the potential side-effects
> should not be underestimated.
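[Editor's note: as an aside, the 1073741823 in the subject line is PostgreSQL's MaxAllocSize, defined as 0x3fffffff in src/include/utils/memutils.h, i.e. one byte shy of 1 GiB; the same 2^30 - 1 cap bounds a single palloc() request and a varlena field. A quick sanity check of that arithmetic (an illustrative sketch, not PostgreSQL code):]

```python
# PostgreSQL caps a single palloc() request at MaxAllocSize, which
# src/include/utils/memutils.h defines as 0x3fffffff ("1 gigabyte - 1").
MAX_ALLOC_SIZE = 0x3FFFFFFF

# This is exactly the number in the error message, and exactly 2**30 - 1.
print(MAX_ALLOC_SIZE)             # 1073741823
print(MAX_ALLOC_SIZE == 2**30 - 1)  # True
```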
It's also fair to doubt that client-side code would "just work" with
no functionality or performance problems for such large values.
I await with interest the OP's results on other JSON processors that
have no issues with GB-sized JSON strings.
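[Editor's note: for anyone wanting to run that experiment, a minimal client-side probe might look like the following. This is hypothetical test scaffolding using Python's stdlib json as one example processor; the payload is kept at 16 MB here and would need to be scaled toward 1 GB to actually stress a library.]

```python
import json

# Round-trip a large string through a client-side JSON processor to see
# where (or whether) it breaks. 16 MB here; push the multiplier toward
# 1 GB to reproduce the scenario discussed in the thread.
payload = "x" * (16 * 1024 * 1024)
doc = json.dumps({"payload": payload})
assert json.loads(doc)["payload"] == payload
print(f"round-tripped {len(doc)} bytes of JSON")
```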
regards, tom lane