From: Hannu Krosing <hannu(at)2ndQuadrant(dot)com>
To: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
Cc: Hannu Krosing <hannu(at)2ndQuadrant(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, "David E. Wheeler" <david(at)justatheory(dot)com>, "pgsql-hackers(at)postgresql(dot)org Hackers" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Duplicate JSON Object Keys
Date: 2013-03-08 21:34:20
Message-ID: 513A595C.2090104@2ndQuadrant.com
Lists: pgsql-hackers
On 03/08/2013 10:01 PM, Alvaro Herrera wrote:
> Hannu Krosing wrote:
>> On 03/08/2013 09:39 PM, Robert Haas wrote:
>>> On Thu, Mar 7, 2013 at 2:48 PM, David E. Wheeler <david(at)justatheory(dot)com> wrote:
>>>> In the spirit of being liberal about what we accept but strict about what we store, it seems to me that JSON object key uniqueness should be enforced either by throwing an error on duplicate keys, or by flattening so that the latest key wins (as happens in JavaScript). I realize that tracking keys will slow parsing down, and potentially make it more memory-intensive, but such is the price for correctness.
>>> I'm with Andrew. That's a rathole I emphatically don't want to go
>>> down. I wrote this code originally, and I had the thought clearly in
>>> mind that I wanted to accept JSON that was syntactically well-formed,
>>> not JSON that met certain semantic constraints.
>> If it does not meet these "semantic" constraints, then it is not
>> really JSON - it is merely JSON-like.
>>
>> this sounds very much like MySQL's decision to support the timestamp
>> "0000-00-00 00:00" - syntactically correct, but semantically wrong.
> Is it wrong? The standard cited says SHOULD, not MUST.
I think one MAY start an implementation with a loose interpretation of
SHOULD, but if at all possible we SHOULD implement the
SHOULD-qualified features :)
http://www.ietf.org/rfc/rfc2119.txt:
SHOULD This word, or the adjective "RECOMMENDED", mean that there
may exist valid reasons in particular circumstances to ignore a
particular item, but the full implications must be understood and
carefully weighed before choosing a different course.
We might start with just throwing a warning for duplicate keys, but I
can see no good reason to do so, except ease of implementation and the
performance of the current JSON-as-text implementation.
And providing a boolean function is_really_json_object(json) to be used in check
constraints seems plain weird.
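Purely as an illustration of why that feels backwards (is_really_json_object()
is the hypothetical helper named above, not an existing function), every table
wanting "real" JSON objects would have to opt in with something like:

    CREATE TABLE docs (
        body json,
        -- hypothetical validator: would return false on duplicate keys
        CHECK (is_really_json_object(body))
    );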
OTOH, as the spec describes JSON as being designed to be a subset of JavaScript,
it SHOULD accept select '{"foo": 1, "foo": 2}'::json; but turn it into
'{"foo": 2}'::json; for storage.
I do not think it would be a good idea to leave it to the data extraction
functions to always return the last value for "foo" (in this case 2).
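That is, the alternative of keeping the duplicate in the stored text and making
every reader agree that the last occurrence wins would look roughly like this
(assuming an extraction operator along the lines of ->, purely for illustration):

    SELECT '{"foo": 1, "foo": 2}'::json -> 'foo';
    -- 2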
------------------
Hannu