From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Bruce Momjian <bruce(at)momjian(dot)us>, Peter Geoghegan <pg(at)heroku(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, Robert Haas <robertmhaas(at)gmail(dot)com>, Kevin Grittner <kgrittn(at)ymail(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>, Larry White <ljw1001(at)gmail(dot)com>
Subject: Re: jsonb format is pessimal for toast compression
Date: 2014-08-14 01:47:15
Message-ID: 53EC1523.9030905@dunslane.net
Lists: pgsql-hackers
On 08/13/2014 09:01 PM, Tom Lane wrote:
> I wrote:
>> That's a fair question. I did a very very simple hack to replace the item
>> offsets with item lengths -- turns out that that mostly requires removing
>> some code that changes lengths to offsets ;-). I then loaded up Larry's
>> example of a noncompressible JSON value, and compared pg_column_size(),
>> which is just about the right thing here since it reports datum size after
>> compression. Remembering that the textual representation is 12353 bytes:
>> json: 382 bytes
>> jsonb, using offsets: 12593 bytes
>> jsonb, using lengths: 406 bytes
> Oh, one more result: if I leave the representation alone, but change
> the compression parameters to set first_success_by to INT_MAX, this
> value takes up 1397 bytes. So that's better, but still more than a
> 3X penalty compared to using lengths. (Admittedly, this test value
> probably is an outlier compared to normal practice, since it's a hundred
> or so repetitions of the same two strings.)
>
>
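For reference, first_success_by is one of the tuning fields of pglz's PGLZ_Strategy struct (src/include/utils/pg_lzcompress.h in 9.4-era sources). Below is a minimal sketch of the kind of override Tom describes; only first_success_by differs from the stock defaults, and the strategy name is invented here for illustration, not taken from the actual patch:

#include <limits.h>

#include "postgres.h"
#include "utils/pg_lzcompress.h"	/* PGLZ_Strategy */

/*
 * Variant of the default pglz strategy that never gives up early: raising
 * first_success_by to INT_MAX makes the compressor keep hunting for matches
 * instead of bailing out when the first 1KB yields no compression -- which
 * is exactly what an incompressible run of increasing offsets provokes.
 * All other fields mirror the stock defaults.
 */
static const PGLZ_Strategy never_give_up_strategy_data = {
	32,			/* min_input_size: skip inputs under 32 bytes */
	INT_MAX,	/* max_input_size: no upper bound */
	25,			/* min_comp_rate: demand at least 25% savings */
	INT_MAX,	/* first_success_by: never abandon the attempt early */
	128,		/* match_size_good: stop history lookup at a 128-byte match */
	10			/* match_size_drop: relax the "good match" bar 10% per loop */
};

const PGLZ_Strategy *const never_give_up_strategy = &never_give_up_strategy_data;

Whether the extra compressor effort pays off is the open question: per the numbers above it gets this value down to 1397 bytes, but that is still more than 3X the 406 bytes the length-based format achieves.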
What does changing to lengths do to the speed of other operations?
cheers
andrew
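To see concretely why switching from offsets to lengths helps, consider the byte pattern each header presents to pglz. The toy program below is not the actual jsonb on-disk layout; it only mimics the header data pattern for a container of identical-size items:

#include <stdint.h>
#include <stdio.h>

#define N_ITEMS		100
#define ITEM_LEN	12			/* pretend every item is 12 bytes */

int
main(void)
{
	uint32_t	offsets[N_ITEMS];
	uint32_t	lengths[N_ITEMS];
	int			i;

	for (i = 0; i < N_ITEMS; i++)
	{
		offsets[i] = (uint32_t) ((i + 1) * ITEM_LEN);	/* 12, 24, 36, ... */
		lengths[i] = ITEM_LEN;							/* 12, 12, 12, ... */
	}

	/*
	 * Every word in "offsets" is distinct, so an LZ-style compressor finds
	 * no history matches in that header; "lengths" is one word repeated
	 * N_ITEMS times, which collapses to almost nothing. That difference is
	 * what turns 12593 bytes into 406 in the measurements quoted above.
	 */
	printf("offsets: %u %u %u ... %u\n",
		   offsets[0], offsets[1], offsets[2], offsets[N_ITEMS - 1]);
	printf("lengths: %u %u %u ... %u\n",
		   lengths[0], lengths[1], lengths[2], lengths[N_ITEMS - 1]);
	return 0;
}

The flip side, and the point of Andrew's question: with per-item lengths, locating the i-th item means summing the first i lengths rather than doing a single array lookup, so random access inside large containers gets slower even as compression improves.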