From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Mike Rylander <mrylander(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Joseph Adams <joeyadams3(dot)14159(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Proposal: Add JSON support
Date: 2010-03-29 00:48:20
Message-ID: 4BAFF8D4.9090308@dunslane.net
Lists: pgsql-hackers
Andrew Dunstan wrote:
>
>
> Robert Haas wrote:
>> On Sun, Mar 28, 2010 at 8:23 PM, Mike Rylander <mrylander(at)gmail(dot)com>
>> wrote:
>>
>>> In practice, every parser/serializer I've used (including the one I
>>> helped write) allows (and, often, forces) any non-ASCII character to
>>> be encoded as \u followed by a string of four hex digits.
>>>
>>
>> Is it correct to say that the only feasible place where non-ASCII
>> characters can be used is within string constants? If so, it might be
>> reasonable to disallow characters with the high-bit set unless the
>> server encoding is one of the flavors of Unicode of which the spec
>> approves. I'm tempted to think that when the server encoding is
>> Unicode we really ought to allow Unicode characters natively, because
>> turning a long string of two-byte wide chars into a long string of
>> six-byte wide chars sounds pretty evil from a performance point of
>> view.
>>
>>
>>
>
> We support exactly one unicode encoding on the server side: utf8.
>
> And the maximum possible size of a validly encoded unicode char in
> utf8 is 4 (and that's pretty rare, IIRC).
>
>
Sorry. Disregard this. I see what you mean.
Yeah, I think *requiring* non-ASCII characters to be escaped would be evil.
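
A minimal sketch of the size difference being discussed, using Python's json
module purely for illustration (the sample string and repetition count are
arbitrary assumptions): forcing \uXXXX escapes turns each BMP character into
6 ASCII bytes, while native UTF-8 needs at most 4 (3 for the text below).

    import json

    sample = "日本語テキスト" * 100               # arbitrary non-ASCII sample text

    escaped = json.dumps(sample, ensure_ascii=True)   # every char becomes \uXXXX
    native  = json.dumps(sample, ensure_ascii=False)  # chars kept as native UTF-8

    print(len(escaped.encode("utf-8")))   # ~6 bytes per character
    print(len(native.encode("utf-8")))    # 3 bytes per character for this text

So escaping only where the client asks for it (or not at all when the server
encoding is UTF-8) avoids roughly doubling the on-the-wire and on-disk size.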
cheers
andrew