From: Arthur Silva <arthurprs(at)gmail(dot)com>
To: Tom Smith <tomsmith1989sk(at)gmail(dot)com>
Cc: Thomas Kellerer <spam_eater(at)gmx(dot)net>, pgsql-general(at)postgresql(dot)org
Subject: Re: JSONB performance enhancement for 9.6
Date: 2015-11-29 15:30:45
Message-ID: CAO_YK0W7bBSmT9Hiv0JP1w4_Nwv=6gnf61DK-VtYTV3+GYf4OA@mail.gmail.com
Lists: pgsql-general
Is this correct? I'm fairly sure jsonb supports lazily parsing objects, and
each object level is searched using a binary search over its sorted keys.
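For context on that claim: jsonb's on-disk format keeps each object's keys sorted, so once a value has been detoasted, finding one key at a given nesting level is a logarithmic binary search rather than a linear scan. A minimal Python sketch of that per-level lookup (illustrative only; the real implementation is C inside PostgreSQL, and `jsonb_style_lookup` is a hypothetical name):

```python
from bisect import bisect_left

def jsonb_style_lookup(sorted_keys, values, key):
    """Binary-search a sorted key array, as jsonb does per object level.

    sorted_keys must be sorted; values[i] corresponds to sorted_keys[i].
    Returns the matching value, or None if the key is absent.
    Hypothetical helper, for illustration only.
    """
    i = bisect_left(sorted_keys, key)
    if i < len(sorted_keys) and sorted_keys[i] == key:
        return values[i]
    return None

# 2000 top-level keys, as in the scenario discussed in this thread:
# the lookup needs only ~11 comparisons instead of scanning all 2000.
keys = sorted(f"k{i:04d}" for i in range(2000))
vals = [f"v{k}" for k in keys]
print(jsonb_style_lookup(keys, vals, "k1234"))  # prints "vk1234"
```

Note this only helps after the value is in memory; it does not address the detoast/decompress cost discussed below.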
On 29/11/2015 11:25 AM, "Tom Smith" <tomsmith1989sk(at)gmail(dot)com> wrote:
> Hi, Thanks for everyone's response.
>
> The issue is not just compression, but the lack of "indexing" or
> "segmentation" within a single doc that has, say, 2000 top-level keys
> (with multiple levels of subkeys). Right now, if I query for one key,
> the whole doc has to be uncompressed and loaded first, and only then
> searched for the single key.
>
> Compared to the traditional way of storing each top-level key in a
> separate column, this is huge overhead when a table scan is required.
> Some kind of "keyed/slotted" storage for the doc could help. For
> illustration: all keys starting with 'A' would have their own storage
> unit, and so on, so when I search for key "A1" only that unit would be
> unpacked and traversed to get the "A1" value. It is as if PostgreSQL
> predefined 26 columns/slots for the whole doc: an internal index
> within each doc for fast retrieval of individual field values.
>
> Someone mentioned a plan in the roadmap for this route, but I'd like to
> know if it is part of the 9.6 plan.
>
> The URL below mentions a similar issue. I am not sure if it has been
> completely resolved.
>
>
> http://stackoverflow.com/questions/26259374/jsonb-performance-degrades-as-number-of-keys-increase
>
> The URL below mentions the potential issue.
>
>
> https://www.reddit.com/r/PostgreSQL/comments/36rdlr/improving_select_performance_with_jsonb_vs_hstore/
>
> Thanks
>
>
>
> On Sun, Nov 29, 2015 at 7:35 AM, Thomas Kellerer <spam_eater(at)gmx(dot)net>
> wrote:
>
>> Tom Smith wrote on 29.11.2015 at 03:27:
>>
>>> Hello:
>>>
>>> Is there a plan for 9.6 to resolve the issue of very slow
>>> query/retrieval of jsonb fields when there is a large number (maybe
>>> several thousands) of top-level keys?
>>> Currently, if I save a large JSON document with thousands of top-level
>>> keys and then query/retrieve field values, the whole document has to
>>> be decompressed and loaded into memory before the specific field
>>> key/value can be searched for.
>>>
>>> Thanks in Advance
>>>
>>
>> If you are concerned about the compression overhead, then why don't you
>> use (or try) JSON instead?
>>
>>
>> --
>> Sent via pgsql-general mailing list (pgsql-general(at)postgresql(dot)org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>>
>
>
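The decompress-the-whole-document overhead described in this thread can be seen in miniature with the standard library: pulling one key out of a compressed 2000-key document costs a full decompress-and-parse, whereas a column-per-key layout touches only the requested value. A rough sketch of the two access paths (an analogy using zlib and JSON text, not PostgreSQL's actual TOAST or jsonb internals):

```python
import json
import zlib

# A document with 2000 top-level keys, compressed as one unit
# (analogous to a single TOASTed jsonb value).
doc = {f"key_{i:04d}": {"a": i, "b": str(i)} for i in range(2000)}
compressed = zlib.compress(json.dumps(doc).encode())

def get_key_from_compressed(blob, key):
    """Whole-document path: decompress and parse everything for one key."""
    whole = json.loads(zlib.decompress(blob).decode())
    return whole[key]

# Column-per-key layout: each value is stored (and fetched) independently.
columns = {k: json.dumps(v) for k, v in doc.items()}

def get_key_from_columns(cols, key):
    """Column path: parse only the single requested value."""
    return json.loads(cols[key])

assert get_key_from_compressed(compressed, "key_1234") == \
       get_key_from_columns(columns, "key_1234")
```

Both paths return the same value; the difference is that the first pays the full decompression and parse cost per lookup, which is the overhead the "keyed/slotted" storage idea above aims to avoid.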
Next Message: Geoff Winkless, 2015-11-29 16:42:18, Re: DISTINCT in STRING_AGG
Previous Message: Bill Moran, 2015-11-29 14:49:41, Re: JSONB performance enhancement for 9.6