From: Tom Smith <tomsmith1989sk(at)gmail(dot)com>
To: John R Pierce <pierce(at)hogranch(dot)com>
Cc: PostgreSQL General <pgsql-general(at)postgresql(dot)org>
Subject: Re: JSONB performance enhancement for 9.6
Date: 2015-11-29 13:35:22
Message-ID: CAKwSVFGMzErJT=Wf0HfFxymXWsZKNoW4uTcHg-qFyEXs2iDK7A@mail.gmail.com
Lists: pgsql-general
Unfortunately, the keys cannot be predefined or fixed; the data is a document,
which is the reason jsonb is used. It works well for small documents with a
small number of keys, but is really slow with a large number of keys. If this
issue is resolved, I think PostgreSQL would be an absolutely superior choice
over MongoDB for document data.
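For context, a minimal sketch of the access pattern being discussed; the table
and key names here are illustrative, not from the thread:

```sql
-- Illustrative only: a document with thousands of top-level keys
-- stored in a single jsonb column.
CREATE TABLE docs (
    id   bigserial PRIMARY KEY,
    body jsonb NOT NULL
);

-- Extracting a single key still requires PostgreSQL to detoast and
-- decompress the entire jsonb value before the key lookup runs.
SELECT body ->> 'some_key'
FROM docs
WHERE id = 1;
```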
On Sun, Nov 29, 2015 at 12:37 AM, John R Pierce <pierce(at)hogranch(dot)com> wrote:
> On 11/28/2015 6:27 PM, Tom Smith wrote:
>
>> Is there a plan for 9.6 to resolve the issue of very slow query/retrieval
>> of jsonb fields when there is a large number (maybe several thousand) of
>> top-level keys? Currently, if I save a large json document with thousands
>> of top-level keys and query/retrieve field values, the whole document has
>> to be decompressed and loaded into memory before searching for the
>> specific field key/value.
>>
>
> If it was my data, I'd be decomposing that large JSON thing into multiple
> SQL records, and storing as much stuff as possible in named SQL fields,
> using JSON in the database only for things that are too ambiguous for SQL.
>
>
>
> --
> john r pierce, recycling bits in santa cruz
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general(at)postgresql(dot)org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
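A hypothetical sketch of the decomposition John suggests above; the schema and
names are illustrative, not something proposed in the thread:

```sql
-- Illustrative sketch: decompose the large document into one row per
-- top-level key, so each lookup touches only one small row instead of
-- detoasting the whole document.
CREATE TABLE doc_fields (
    doc_id bigint NOT NULL,
    key    text   NOT NULL,
    value  jsonb,   -- keep jsonb only for values too ambiguous for SQL
    PRIMARY KEY (doc_id, key)
);

-- Fetching one field is now an index lookup on a small row.
SELECT value
FROM doc_fields
WHERE doc_id = 1 AND key = 'some_key';
```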