From: Josh Berkus <josh(at)agliodbs(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: additional json functionality
Date: 2013-11-15 23:15:21
Message-ID: 5286AB09.4070607@agliodbs.com
Lists: pgsql-hackers
On 11/15/2013 02:59 PM, Merlin Moncure wrote:
> On Fri, Nov 15, 2013 at 4:31 PM, Hannu Krosing <hannu(at)2ndquadrant(dot)com> wrote:
> I think you may be on to something here. This might also be a way
> opt-in to fast(er) serialization (upthread it was noted this is
> unimportant; I'm skeptical). I deeply feel that two types is not the
> right path but I'm pretty sure that this can be finessed.
>
>> As far as I understand merlin is mostly ok with stored json being
>> normalised and the problem is just with constructing "extended"
>> json (a.k.a. "processing instructions") to be used as source for
>> specialised parsers and renderers.
Thing is, I'm not particularly concerned about *Merlin's* specific use
case, which there are ways around. What I am concerned about is that we
may have users who have years of data stored in JSON text fields which
won't survive an upgrade to binary JSON, because we will stop allowing
certain things (ordering, duplicate keys) which are currently allowed in
those columns. At the very least, if we're going to have that kind of
backwards compatibility break, we'll want to call the new version 10.0.
That's why naming old JSON as "json_text" won't work; it'll be a
hardened roadblock to upgrading.
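To make the concern concrete: here is a minimal sketch, using Python's
json module as a stand-in for any normalizing parser, of what happens to
stored JSON text that relies on duplicate keys once it passes through a
binary representation. (This is an illustration of the general behavior,
not of any particular PostgreSQL implementation.)

```python
import json

# JSON text stored in an existing column may contain duplicate keys
# and rely on key ordering; both are legal for a text column that only
# validates syntax.
raw = '{"a": 1, "a": 2, "b": 3}'

# A normalizing parser collapses the duplicate key "a" down to its
# last value, silently discarding data.
normalized = json.loads(raw)
print(normalized)  # {'a': 2, 'b': 3}

# Round-tripping no longer reproduces the original text -- the kind of
# silent change that can break an upgrade of years of stored data.
assert json.dumps(normalized) != raw
```

Any upgrade path that forces such data through a normalizing
representation would need either to reject those rows or to alter them.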
--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com