From: Josh Berkus <josh@agliodbs.com>
To: Arthur Silva <arthurprs@gmail.com>
Cc: "David E. Wheeler" <david@justatheory.com>, pgsql-hackers@postgresql.org, Tom Lane <tgl@sss.pgh.pa.us>, Andrew Dunstan <andrew@dunslane.net>, Jan Wieck <jan@wi3ck.info>
Subject: Re: jsonb format is pessimal for toast compression
Date: 2014-09-12 16:52:11
Message-ID: 541324BB.5020002@agliodbs.com
Lists: pgsql-hackers

On 09/11/2014 06:56 PM, Arthur Silva wrote:
>
> In my testing with the github archive data the savings vs.
> performance-penalty trade-off was fine, but I'm not confident in those
> results since there were only 8 top-level keys.
Well, we did want to confirm that the patch doesn't create a regression
with data that doesn't fall into the problem-case area, and your test
did that nicely.
> For comparison, some twitter api objects[1] have 30+ top-level keys. If
> I have time in the next couple of days I'll run some tests with the
> public twitter fire-hose data.
Yah, if we have enough time for me to get the Mozilla Socorro test
environment working, I can also test with Mozilla crash data. That has
some deep nesting and very large values.
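For anyone following along, the core of the problem being tested here can be illustrated outside of PostgreSQL. A jsonb container starts with an array of per-entry words; storing absolute offsets yields a monotonically increasing, low-redundancy prefix that pglz struggles to compress, while storing lengths yields a highly repetitive prefix. The following is a toy sketch only (not PostgreSQL's actual jsonb or pglz code), using zlib as a stand-in compressor and invented helper names:

```python
# Toy illustration of why the jsonb header layout matters for toast
# compression. We build a header for N equal-length values two ways:
# as absolute offsets (each 4-byte word differs from the last) and as
# lengths (the same 4-byte word repeated N times), then compare
# zlib-compressed sizes. zlib here is only a stand-in for pglz.
import struct
import zlib

def header_as_offsets(n_values, value_len):
    # Absolute end offsets: 8, 16, 24, ... -- little repetition.
    offs = [value_len * (i + 1) for i in range(n_values)]
    return struct.pack("<%dI" % n_values, *offs)

def header_as_lengths(n_values, value_len):
    # Per-entry lengths: one 4-byte pattern repeated n_values times.
    return struct.pack("<%dI" % n_values, *([value_len] * n_values))

n, vlen = 1000, 8
off_hdr = header_as_offsets(n, vlen)   # 4000 bytes
len_hdr = header_as_lengths(n, vlen)   # 4000 bytes

# The lengths-based header compresses far better than the offsets-based
# one, which is the intuition behind the proposed format change.
print("offsets header compressed:", len(zlib.compress(off_hdr)))
print("lengths header compressed:", len(zlib.compress(len_hdr)))
```

This doesn't model pglz's first-1KB match requirement, but it shows the redundancy difference the thread's benchmarks (8 keys vs. 30+ keys) are probing.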
--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com