From: Merlin Moncure <mmoncure(at)gmail(dot)com>
To: Bruce Momjian <bruce(at)momjian(dot)us>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Peter Geoghegan <pg(at)heroku(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, Robert Haas <robertmhaas(at)gmail(dot)com>, Kevin Grittner <kgrittn(at)ymail(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>, Larry White <ljw1001(at)gmail(dot)com>
Subject: Re: jsonb format is pessimal for toast compression
Date: 2014-08-14 17:13:55
Message-ID: CAHyXU0ztEmcAUtERhNO1k9N126YYifHDjFP_+N4B8QWNNh=YOw@mail.gmail.com
Lists: pgsql-hackers
On Thu, Aug 14, 2014 at 11:52 AM, Bruce Momjian <bruce(at)momjian(dot)us> wrote:
> On Thu, Aug 14, 2014 at 12:22:46PM -0400, Tom Lane wrote:
>> Bruce Momjian <bruce(at)momjian(dot)us> writes:
>> > Uh, can we get compression for actual documents, rather than duplicate
>> > strings?
>>
>> [ shrug... ] What's your proposed set of "actual documents"?
>> I don't think we have any corpus of JSON docs that are all large
>> enough to need compression.
>>
>> This gets back to the problem of what test case are we going to consider
>> while debating what solution to adopt.
>
> Uh, we just need one 12k JSON document from somewhere.  Clearly this
> is something we can easily get.
it's trivial to make a large json[b] document:
select length(to_json(array(select row(a.*) from pg_attribute a))::TEXT);
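A hedged aside on checking whether such a generated document actually compresses once stored: the table and column names below are illustrative only (not from this thread), and this assumes PostgreSQL 9.4+ with default EXTENDED storage, where values over the ~2 kB TOAST threshold get pglz-compressed on insert. pg_column_size() then reports the stored (possibly compressed) size, so comparing json against jsonb for the same document shows the effect being debated:

```sql
-- Illustrative sketch: store the same generated document as json and jsonb,
-- then compare on-disk sizes. Compression happens at storage time, so the
-- comparison must be done against a table row, not a freshly computed datum.
CREATE TEMP TABLE doc_sizes (j json, jb jsonb);

INSERT INTO doc_sizes
SELECT doc, doc::jsonb
FROM (SELECT to_json(array(SELECT row(a.*) FROM pg_attribute a)) AS doc) s;

-- pg_column_size() reports the stored (TOAST-compressed where applicable) size.
SELECT pg_column_size(j)  AS json_bytes,
       pg_column_size(jb) AS jsonb_bytes
FROM doc_sizes;
```

If the jsonb header layout really defeats pglz, jsonb_bytes should come out noticeably larger than json_bytes for the same logical document.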