I wanted to pull together some of the recent events related to native JSON support within web browsers, since they should be of importance to many web developers. This serves as a sort-of follow-up to my previous post: Native JSON Support is Required.
Early API Standardization Attempts – Last year, a number of attempts were made by the ECMAScript language committee to standardize an API for JSON encoding and decoding, within the language. A few API proposals were examined and discussed, most based upon Crockford’s proposal, but no general consensus was reached. Some general issues with the proposed API were brought forward (similar to those mentioned in my previous post).
JSON2.js – Late last year Crockford quietly released a new version of his JSON API that replaced his existing API. The important difference was that it used a single base object (JSON) instead of extending all native object prototypes (booo!). Its revised API works like so:
JSON.stringify({name: "John", location: "Boston"});
// => '{"name":"John","location":"Boston"}'

JSON.parse('{"name":"John","location":"Boston"}');
// => {name: "John", location: "Boston"}
This version of JSON.js is highly recommended. If you’re still using the old version, please please upgrade (this one, undoubtedly, causes fewer issues than the previous one).
Dispersion – Even with a newly-proposed API, discussion concerning the inclusion of JSON in the ECMAScript language came to a questionable conclusion. Some implementors weren’t interested in including it, others were, and yet others couldn’t decide on the final API or method names. It was then decided that implementing JSON support in an ECMAScript implementation would be left up to the implementors themselves (unless, of course, some other conclusion is arrived at in the future).
Mozilla Implements Native JSON – Mozilla was the first to implement native JSON support within its browser. Note, however, that this is not a web-page-accessible API but an API that’s usable from within the browser (and by extensions) itself. This was the first step needed to implement the API for further use.
Here is an example of it in use (works within an extension, for example):
var nativeJSON = Components.classes["@mozilla.org/dom/json;1"]
                           .createInstance(Components.interfaces.nsIJSON);

nativeJSON.encode({name: "John", location: "Boston"});
// => '{"name":"John","location":"Boston"}'

nativeJSON.decode('{"name":"John","location":"Boston"}');
// => {name: "John", location: "Boston"}
Web-accessible Native JSON – The final, and most important, step is being worked on right now – a way to access native JSON encoding and decoding from web pages. How it’ll be accessible is up for some debate (as having its name conflict with an existing object would be a really bad thing). Regardless, there should be something within the browser by the time the Firefox 3 betas wrap up.
What’s important about this is that it’s really not a case of “Oh well, guess we’ll have to wait for other browsers to implement this.” Since this is a native implementation (and, thus, very-very fast) existing JSON encoding/decoding libraries can just check for the existence of this particular set of functions and use them directly – gaining a direct, and immediate, performance boost for Firefox users. The same principle applies to features like getElementsByClassName, since they’re available in normal JavaScript code, but are insanely fast when implemented directly by a browser.
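To make that concrete, here is a rough sketch (the toJSON name is made up for illustration and isn’t from any particular library) of how a helper might detect and prefer a native implementation, falling back to a deliberately minimal script serializer when one isn’t available. The fallback below only handles strings, numbers, booleans, arrays, and plain objects, so treat it as an illustration rather than a replacement for json2.js:

function toJSON(value) {
    // Prefer the native implementation when the browser provides one.
    if (typeof JSON !== "undefined" && typeof JSON.stringify === "function") {
        return JSON.stringify(value);
    }
    // Otherwise fall back to a tiny script serializer (illustration only:
    // it doesn't escape control characters or handle dates, functions, etc.).
    switch (typeof value) {
        case "string":
            return '"' + value.replace(/(["\\])/g, "\\$1") + '"';
        case "number":
        case "boolean":
            return String(value);
        case "object":
            if (value === null) {
                return "null";
            }
            if (value instanceof Array) {
                var items = [];
                for (var i = 0; i < value.length; i++) {
                    items.push(toJSON(value[i]));
                }
                return "[" + items.join(",") + "]";
            }
            var pairs = [];
            for (var key in value) {
                if (value.hasOwnProperty(key)) {
                    pairs.push(toJSON(key) + ":" + toJSON(value[key]));
                }
            }
            return "{" + pairs.join(",") + "}";
    }
}

toJSON({name: "John", location: "Boston"});
// => '{"name":"John","location":"Boston"}'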
Foo (January 30, 2008 at 5:02 am)
“Some implementors weren’t interested in including it, others were, and yet other couldn’t decide on the final API or method names.”
That just sucks. Just implement it already and use whatever names….big deal! We have seen something similar in PHP… where they have had a lot of discussions on how namespacing is going to work in PHP… I mean c’mon!
Richard D. Worth (January 30, 2008 at 5:13 am)
This is great news. Looking forward to the eventuality. Thanks for the update, John.
Ric (January 30, 2008 at 7:39 am)
What about schema and validation? Kris Zyp has some great suggestions on JSON.com, and there is also some work on selectors (JSONPath) and relative references (a JSON object containing other objects).
Kris Zyp (January 30, 2008 at 8:54 am)
This is good to hear. However, why would Mozilla not follow Crockford’s lead with the API (JSON.parse and JSON.stringify)? He wrote it with the ability to defer to a native implementation when available. The most elegant upgrade path is always when existing code can continue to work, but be replaced by or defer to native code when available. Functionality is not affected and users of implementing browsers enjoy faster speed.
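For reference, the upgrade path Kris describes usually looks something like this sketch (an assumed shape, not the actual json2.js source): only define the JSON object and its methods when the host hasn’t already supplied native versions, so existing pages keep working today and silently pick up the fast native path once browsers ship it.

// Assumed shape, not the actual json2.js source.
var JSON = JSON || {};

if (typeof JSON.stringify !== "function") {
    JSON.stringify = function (value) {
        // a script-based serializer would live here in a real library
        throw new Error("script fallback omitted from this sketch");
    };
}

if (typeof JSON.parse !== "function") {
    JSON.parse = function (text) {
        // a script-based parser would live here in a real library
        throw new Error("script fallback omitted from this sketch");
    };
}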
Chris Dary (January 30, 2008 at 9:10 am)
Hey John,
JSON.stringify({name: "John", location: "Boston"});
// => "{'name':'John','location':'Boston'}"
That’s actually incorrect JSON. The single quotes must be double quotes to be valid JSON. It’s a common mistake. Should be:
JSON.stringify({name: "John", location: "Boston"});
// => '{"name":"John","location":"Boston"}'
( You can check it yourself with a little validator I made called JSON Lint – http://www.jsonlint.com – if you’d like )
John Resig (January 30, 2008 at 11:11 am)
@Foo: If only it were that easy – but when something is done by committee/consensus, the end result isn’t always the most desirable. At least in the case of PHP there’s only a single implementor – so they always have the last say.
@Ric: One step at a time. I imagine that Kris’ stuff would have to see some level of adoption first before being considered for inclusion in a browser.
@Kris: Mozilla’s implementation was written as more of a port of Google Caja’s JSON implementation (which, as I understand it, has fewer bugs in it). Caja actually uses JSON.serialize() and JSON.unserialize(). I don’t remember the exact reasoning, but encode/decode were chosen because serialize/unserialize already have specific connotations, whereas encoding and decoding don’t. And “stringify”? Would you seriously want to use a “stringify” function? As it stands, it’s probably a good thing that the API method names aren’t identical – as they don’t provide identical functionality (he allows for a filter function, we have a filter array – stuff like that). All that being said, the final web-user-accessible API hasn’t been finalized yet; it may be very different (although, it definitely won’t be something prominent like ‘JSON.stringify’ until some final consensus on the name/API is arrived at).
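To illustrate the filter distinction John mentions, here is roughly how the two styles look against a JSON.stringify-style API (the person object is just sample data, and the nsIJSON array form may differ in its details, so treat this as an approximation): a filter function sees every key/value pair and can drop or rewrite values, while a filter array acts as a simple whitelist of keys.

var person = {name: "John", location: "Boston", password: "hunter2"};

// Filter function: drop any key named "password".
JSON.stringify(person, function (key, value) {
    return key === "password" ? undefined : value;
});
// => '{"name":"John","location":"Boston"}'

// Filter array: keep only the listed keys.
JSON.stringify(person, ["name", "location"]);
// => '{"name":"John","location":"Boston"}'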
@Chris: My mistake, I was just writing it by hand – it should be fixed now.
Tom (January 30, 2008 at 11:40 am)
My problem is with JSON itself. In an attempt to be compatible with JavaScript and Python, it requires quotes around the keys (as in: {"key": "value"} being required instead of {key: "value"}) which no one in JS land wants to do. It also makes writing and reading JSON harder (sometimes worse than XML). People also often miss the difference and use things as “JSON” which actually aren’t compliant, in some extreme cases assuming that anything you can eval() is JSON. Also, noting that you can regex validate JSON, most people won’t if they are hacking. (Which I guess leads to the value of native JSON support.) I wonder if any Python code makes similar mistakes. Finally, lack of comment support in JSON is just disastrous for any hand-maintained files.
Summary: JSON is hard to write, hard to read, easy to misunderstand the spec, and easy to implement in insecure fashions. Only with people who understand it well, or with third-party (preferably native) code, and then only used as simple data interchange (not for hand-maintained files) is it useful. Interesting when the spec seems so simple.
Also, I used to be a JSON fan when I first heard of it, but I long ago became disillusioned.
Tom (January 30, 2008 at 3:02 pm)
More specifically, here are the changes I would make to JSON:
1. Allow ‘ for raw strings (no escapes needed).
2. Make quotes optional for field keys.
3. Support multi-line strings.
4. Allow trailing commas.
5. Allow comments.
6. Allow multiple objects on a stream without making them in a big array.
Leaving out some details, I think that would do it. And some details are purposely designed to make sure it wouldn’t be a subset of either ECMAScript or Python.
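Purely as an illustration of what those relaxed rules would permit (this is Tom’s hypothetical syntax, not a spec, and it is not valid JSON today):

{
    // server settings (comments allowed)
    host: 'example.com',   // unquoted key, raw single-quoted string
    ports: [80, 443,],     // trailing comma allowed
}

Today’s strict JSON equivalent would be:

{"host": "example.com", "ports": [80, 443]}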
Robert Sayre (January 30, 2008 at 3:25 pm)
Kris and John, it’s closer to Crockford’s json2.js now. That didn’t exist when we started, but I’ve moved it closer now that it exists.
Chris (January 31, 2008 at 7:06 am)
Wouldn’t it be wonderful if SOMEONE decided to pen a library that parsed JSON without the evil (sorry – eval) function? Then things like AIR would be pleasant for developers.
Chris Snyder (January 31, 2008 at 9:12 am)
@Tom Good points. JSON _should_ be more flexible and dev-friendly than XML is. Several of your points require a parser that is not based on eval(), or at least some filtering prior to the eval() call.
Nothing is stopping you (or ANYONE else) from re-mixing the JSON spec and writing your own parser.
@Chris I always thought the eval() hack was sloppy. It hides in the middle of JSON.parse() like a midget hiding inside a Mechanical Turk. But who really wants to use regex to parse?
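For what it’s worth, the usual middle ground is the filtering Chris Snyder mentions: run a regular expression over the text to reject anything that couldn’t be JSON, and only then hand it to eval(). The sketch below (cautiousParse is a made-up name) follows the well-known check suggested by Crockford’s original parser and RFC 4627; it is blunt, but it keeps arbitrary code away from eval():

function cautiousParse(text) {
    // Strip out string literals, then make sure only JSON punctuation,
    // numbers, whitespace, and keyword characters remain.
    var stripped = text.replace(/"(\\.|[^"\\])*"/g, "");
    if (/[^,:{}\[\]0-9.\-+Eaeflnr-u \n\r\t]/.test(stripped)) {
        throw new SyntaxError("Input doesn't look like JSON");
    }
    return eval("(" + text + ")");
}

cautiousParse('{"name": "John", "location": "Boston"}');
// => {name: "John", location: "Boston"}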
Eric Wahlforss (January 31, 2008 at 9:21 am)
Agree, great news indeed!
Kris Zyp (January 31, 2008 at 10:32 am)
@Chris – Crockford has written a non-eval based JSON parser. It was just a lot slower, so no one wanted it. Every tool has its purpose, including eval. It is naive to believe it should never be used. But of course native parsing is certainly the best.
Adam (January 31, 2008 at 12:30 pm)
There are some things that are seriously missing in JSON.
First is literal dates; much of what we want to transfer using JSON includes some date information.
Second is comments, which Tom has already mentioned.
Third is relaxation of the requirement for text keys.
Fourth is a divorce from ECMAScript. If JSON is to be a lightweight, cross-platform information exchange format, it needs to be developed independently of JavaScript. This will give more stakeholders (Java, PHP, Ruby people) a say in it.
HMK (January 31, 2008 at 5:25 pm)
When I first heard about JSON I thought it looked very interesting. I was working on something that required the use of French accented characters but I’m buggered if I could find a way to get JSON to accept and display them. Has anything changed recently?
Edward (January 31, 2008 at 7:09 pm)
@HMK – JSON fully supports Unicode, so French accented characters are no problem. They don’t even require special encoding, though they can be encoded if desired.
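As a quick illustration (using the JSON.parse/JSON.stringify API discussed in the post), both of these calls decode to the same string; accented characters can travel as literal text or as \u escapes, whichever the serializer prefers:

JSON.parse('{"ville": "Montréal"}').ville;        // => "Montréal"
JSON.parse('{"ville": "Montr\\u00e9al"}').ville;  // => "Montréal"

JSON.stringify({ville: "Montréal"});
// => '{"ville":"Montréal"}' (some serializers emit the \u00e9 escape instead)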
Dean Edwards (January 31, 2008 at 7:33 pm)
> The State of JSON
Or “JSON is in a state”. It’s annoying that fundamental functionality is being fragmented like this. There is no clear ES4 standard either.
Pete Allen (February 4, 2008 at 11:23 pm)
I sure wish this passed JSLint, which we run on our entire JS code base. Any ideas how to make those case statements without break pass JSLint?
Francesco (February 5, 2008 at 3:09 pm)
Hi John.
The new JSON API, during stringify, converts Date objects to ISO date strings. But during parsing, ISO dates remain strings. I would suggest the following to parse those ISO dates too:
<code>
var str = some_JSON_String_with_ISO_dates;
function fromISO2Date(k, v) {
    if (typeof v != 'string'
        || !/^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z$/.test(v)
    ) return v;
    eval("var d = new Date(" + v.replace(/(-|T|:)/g, ",").replace(/Z/, "") + ")");
    return d;
}
var json = JSON.parse(str,fromISO2Date);
</code>
I modified my personal json2.js to automatically convert the ISO dates, but the above is a good compromise.
Francesco (February 5, 2008 at 3:12 pm)
Oops, I followed the instructions but “code” didn’t work :)
Francesco (February 12, 2008 at 7:44 am)
There was a little error in the function above.
This is the correct line:
eval("var d = new Date('"+v.replace(/(-|T|:)/g,"','").replace(/Z/,"")+"')");