From: "David E. Wheeler" <david(at)justatheory(dot)com>
To: "pgsql-hackers(at)postgresql(dot)org Hackers" <pgsql-hackers(at)postgresql(dot)org>
Subject: Duplicate JSON Object Keys
Date: 2013-03-07 19:48:45
Message-ID: 60885A46-5CC8-4B40-BF35-B4C28BFD5480@justatheory.com
Lists: pgsql-hackers
This behavior surprised me a bit:
david=# select '{"foo": 1, "foo": 2}'::json;
json
----------------------
{"foo": 1, "foo": 2}
I had expected something more like this:
david=# select '{"foo": 1, "foo": 2}'::json;
json
------------
{"foo": 2}
This hasn’t been much of an issue before, but with Andrew’s JSON enhancements going in, it will start to cause problems:
david=# select json_get('{"foo": 1, "foo": 2}', 'foo');
ERROR: field name is not unique in json object
Andrew tells me that the spec requires this. I think that’s fine, but I would rather it never got that far.
In the spirit of being liberal about what we accept but strict about what we store, it seems to me that JSON object key uniqueness should be enforced either by throwing an error on duplicate keys, or by flattening so that the latest key wins (as happens in JavaScript). I realize that tracking keys will slow parsing down, and potentially make it more memory-intensive, but such is the price for correctness.
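For illustration (not PostgreSQL code), the two behaviors proposed above can be sketched with Python's `json` module, whose default is the same last-key-wins flattening as JavaScript, and whose `object_pairs_hook` parameter allows the strict error-on-duplicate alternative:

```python
import json

# Default behavior: the latest key wins, as in JavaScript object literals.
flattened = json.loads('{"foo": 1, "foo": 2}')
print(flattened)  # {'foo': 2}

# Strict alternative: track keys during parsing and reject duplicates.
def reject_duplicates(pairs):
    """object_pairs_hook that raises on a repeated object key."""
    obj = {}
    for key, value in pairs:
        if key in obj:
            raise ValueError("field name is not unique in json object: %r" % key)
        obj[key] = value
    return obj

try:
    json.loads('{"foo": 1, "foo": 2}', object_pairs_hook=reject_duplicates)
except ValueError as e:
    print(e)
```

Either way, the duplicate is resolved (or rejected) at parse time, before the value is stored or queried.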
Thoughts?
Thanks,
David