There was an interesting piece of JavaScript performance analysis done recently by the PBWiki team. They wanted to understand how quickly JavaScript libraries load (their loading speed, obviously, has a large effect on the total loading speed of a page). They set up a system to gather input from random browsers, aggregating the results into a final breakdown. There’s a lot that application, and browser, developers can learn from the results – the amount of information available is actually quite profound:
JavaScript Packaging Techniques
When distributing a piece of JavaScript code, it’s traditional to think that the smallest (byte-size) code will download and load the fastest. This turns out not to be true – and that is a fascinating result of this survey. Consider the speed of loading jQuery in three forms: normal, minified (using Yahoo’s YUI Compressor), and packed (using Packer). By order of file size, packed is the smallest, then minified, then normal. However, the packed version has an overhead: it must be uncompressed, on the client side, using a JavaScript decompression algorithm. This unpacking has a tangible cost in load time. In the end, this means that using a minified version of the code is much faster than the packed one – even though its file size is quite a bit larger.
Packaging Comparison (loading jquery, all variants)
Technique | Time Avg (ms) | Samples |
---|---|---|
minified | 519.7214 | 12611 |
packed | 591.6636 | 12606 |
normal | 645.4818 | 12589 |
Next time you pick a compression technique, remember this formula:
Total_Speed = Time_to_Download + Time_to_Evaluate
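To make this concrete, here’s a rough sketch of how you could measure the two halves of that formula yourself – this is not the PBWiki test harness, and the script URL is just a placeholder:

```javascript
// A minimal sketch of timing both halves of the formula above:
// download time via XHR, then evaluation time via eval().
function timeScript(url, callback) {
  var xhr = new XMLHttpRequest();
  var t0 = new Date().getTime();
  xhr.open("GET", url, true);
  xhr.onreadystatechange = function() {
    if ( xhr.readyState != 4 ) return;
    var t1 = new Date().getTime();   // download finished
    eval(xhr.responseText);          // evaluate (and, for packed code, unpack)
    var t2 = new Date().getTime();   // evaluation finished
    callback({ download: t1 - t0, evaluate: t2 - t1, total: t2 - t0 });
  };
  xhr.send(null);
}

timeScript("jquery.min.js", function(result) {
  alert( result.download + "ms to download, " + result.evaluate + "ms to evaluate" );
});
```

A packed file will usually win on the download number and lose on the evaluate number – which is exactly the trade-off the table above captures.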
JavaScript Library Performance
The next nugget of information we can unearth is the total performance of the JavaScript libraries when loading within a page (this includes both their transfer time and their evaluation time). Thus, a library that is both smaller and simpler will load faster. Looking at the results, you can see a comparatively large lead for jQuery (200-400ms – a perceptible difference in speed).
Average Time to Load Toolkit (non-cached, gzipped, minified)
Toolkit | Time Avg (ms) | Samples |
---|---|---|
jquery-1.2.1 | 732.1935 | 3152 |
dojo-1.0.1 | 911.3255 | 3143 |
prototype-1.6.0 | 923.7074 | 3144 |
yahoo-utilities-2.4.0 | 927.4604 | 3141 |
protoculous-1.0.2 | 1136.5497 | 3136 |
Now, some might argue that testing the speed of un-cached pages is unfair; however, according to Yahoo’s research on caching, approximately 50% of users will never have the opportunity to have the page contents cached. Thus, making sure that your page loads quickly both on the initial load and on subsequent loads should be of the utmost importance.
Average Time to Load Toolkit (cached, gzipped, minified)
Toolkit | Time Avg (ms) | Samples |
---|---|---|
yahoo-utilities-2.4.0 | 122.7867 | 3042 |
jquery-1.2.1 | 131.1841 | 3161 |
prototype-1.6.0 | 142.7332 | 3040 |
dojo-1.0.1 | 171.2600 | 3161 |
protoculous-1.0.2 | 276.1929 | 3157 |
Once you examine cached speeds, the difference becomes much less noticeable (a 10-30ms difference – with the exception of Prototype/Scriptaculous). Since these results are completely cached, we can roughly gauge how much of the overhead comes from file transfer as opposed to evaluation speed.
If nothing else, I think this type of analysis warrants further examination. Using user-generated input, against live datasets, to create real-world performance metrics is valuable to everyone involved – users, framework developers, and browser vendors.
» More information on the web browser performance shown below.
Web Browser Performance
Finally, this test gives us the opportunity to examine the load speed of some real-world code – specifically, the performance of evaluating the scripts when they’re retrieved from a cache.
Browser Comparison (loading jquery from cache)
Browser | Time Avg (ms) | Samples |
---|---|---|
Firefox 3.x | 14.0000 | 2 |
Safari | 19.8908 | 284 |
IE 7.x | 27.4372 | 247 |
IE 6.x | 41.3167 | 221 |
Firefox 2.x | 111.0662 | 2009 |
Opera 5.x | 925.3057 | 157 |
There are a couple of things that we can note about the results:
- Even though there aren’t that many samples yet, it’s pretty obvious that Firefox 3 is going to be much faster than Firefox 2 – the full extent will only become apparent after its final release, and further analysis.
- There was a definite jump in performance in IE 7 from IE 6.
- The Opera results are suspect – they were listed as Opera 5 (which doesn’t make sense – who still uses Opera 5?) and are too high – causing me to suspect tampering.
Dave (February 5, 2008 at 4:27 am)
Also, why are there only 2 Firefox 3.x’s? I’m one of them. Who’s the other?
Another thing to remember is that tests like this are generally skewed. I can’t imagine many non-developers would be interested in running the test. I guess the geekier you are, the faster your internet connection.
Da Scritch (February 5, 2008 at 4:55 am)
Yep. Packed is bad for performance, and even worse: MSIE (and every other browser) is not able to cache a packed js library, so you lose all the benefits on each page load.
Minified js is my preference, and better for my hair.
alsanan (February 5, 2008 at 5:16 am)
Where is MooTools? I guess you don’t consider MooTools one of the main ones, but I see that you’ve included protoculous, which I don’t know – or maybe it’s not a library at all (scriptaculous!?).
Andrea Giammarchi (February 5, 2008 at 5:44 am)
Well done John, but I wonder why no one has ever tried packed.it … it uses different techniques both for minifying and for serving the file(s).
For example, try comparing jQuery + jQuery UI + CSS in a *single* file vs. every other method (or !YUI with every CSS, jQuery + Ext, or whatever you want) ;)
bugrain (February 5, 2008 at 5:55 am)
alsanan – these are not John’s results; they are from the PBWiki team (as stated at the top). I guess their choice of libraries is just meant to give a cross-section – not a recommendation.
protoculous is just a merge of Prototype/script.aculo.us into one file. I’m not sure how valid this is, as the various flavours of it are already compressed in some way, which may account for its apparent poor performance in the ‘cached’ test. (John’s comment is slightly misleading here, as he refers to “Prototype/Scriptaculous” when he means protoculous.)
That said, the results are wrong in some respects (as Dave hinted at). I’ve just run the test a couple of times with FF3.0b2, yet in the “Browser Comparison (loading jquery from cache)” table the row for “Mozilla/Netscape 6.x” was incremented. All other sample totals went up by one, so that must be me! It seems like the browser sniffing is not working properly.
zimbatm (February 5, 2008 at 8:25 am)
I always felt that packed was a bad solution. Most recent web servers and clients support gzipped content encoding anyway, and their implementations are certainly much better. Not to mention that debugging such code is needlessly hard.
Favorite Browser (February 5, 2008 at 8:54 am)
Haha, Opera 5. By the way, which version of Safari was it? 2 or 3?
John Resig (February 5, 2008 at 10:37 am)
@Dave: That’s a good point – meaning that the difference between the libraries would become even more pronounced, I would assume.
@Da Scritch: Umm, packed scripts can definitely be cached. The difference is that they still have the overhead of unpacking themselves on every page access (which is what is shown above).
@alsanan: I didn’t create this test so I wouldn’t know about Mootools. Protoculous is a common abbreviation for Prototype + Scriptaculous.
@bugrain: How are they already compressed? I think it just has more to do with the fact that there’s more code being loaded, causing longer page load times.
Ah, that’s a good catch about Firefox 3 being detected as Netscape 6. That’s frustrating.
@Favorite Browser: I’m not sure – it seems like the test creators just lumped them together as a single entity, which seems silly.
Dean Edwards (February 5, 2008 at 10:57 am)
@Da Scritch – MSIE can cache packed code. It’s still normal JavaScript.
Packer 3.1 is due out soon. It has increased decode speeds of about 200-300%. It is worth pointing out that Packer is not just about base62 compression; it is a very efficient minifier too.
Kevin H (February 5, 2008 at 10:59 am)
I bet the Opera results, like the iPhone results, are primarily based on cellphone tests. The biggest problem with this experiment, really, is that they did not control for connection speed. They should have had a test that just downloaded a fixed-size file, and then weighted their results based on how long that download took. The reason the Firefox 3.x results look so good, probably, is that it has only been tested on two clients, and they both probably have fat-pipe connections to the internet.
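Something along these lines would do it – just a sketch, and the reference file name and sizes are made up:

```javascript
// Sketch of the normalization idea: time a fixed-size reference download
// first, then express each library timing relative to that baseline.
function timeDownload(url, callback) {
  var xhr = new XMLHttpRequest();
  var start = new Date().getTime();
  xhr.open("GET", url + "?nocache=" + start, true);  // bust the cache
  xhr.onreadystatechange = function() {
    if ( xhr.readyState == 4 ) callback(new Date().getTime() - start);
  };
  xhr.send(null);
}

timeDownload("reference-50k.bin", function(baseline) {
  timeDownload("jquery.min.js", function(raw) {
    // Dividing by the baseline discounts fat-pipe connections.
    alert( "raw: " + raw + "ms, weighted: " + (raw / baseline).toFixed(2) );
  });
});
```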
Kevin H (February 5, 2008 at 11:02 am)
Oh, and I bet the misidentification of FF3 is because of the “Minefield” user agent string.
Joao Pedrosa (February 5, 2008 at 12:43 pm)
What about the accompanying json.js file for jQuery? Other libraries might not depend on it, which both saves a connection and bloats their main files a little more. :P
Schrep (February 5, 2008 at 1:30 pm)
Are these samples comparing runs across different environments? You have to be careful to only compare test runs on exactly the same environment.
Batiste (February 5, 2008 at 1:35 pm)
I found similar information once using Firebug:
http://batiste.dosimple.ch/blog/2007-07-02-1/
But I don’t know enough about Firebug reporting to know if it’s relevant.
Scott (February 5, 2008 at 1:53 pm)
Another thing to consider is the design of the framework you want to use. Frameworks like jQuery and Prototype often come in one large file, whereas Dojo and YUI are composed of many different files, which allows you to include only the functionality you need and reduce the amount of script you have to load.
Jakub Nesetril (February 5, 2008 at 2:14 pm)
As well, you can safely assume that the hardware and internet connection speeds are not evenly distributed among browsers – i.e. the slow times for IE6 can easily mean slower tubes and older hardware.
Without well understood demographics, user-generated tests like these are easily misunderstood.
Michal Till (February 5, 2008 at 3:35 pm)
It’s a pity that they didn’t add the YUI Compressor.
mgroves (February 5, 2008 at 3:38 pm)
Those must be Opera Mobile or Opera Mini. It’s been shown that Opera for desktops is far and away among the fastest at JavaScript (see http://celtickane.com/webdesign/jsspeed.php)
Patrick Donelan (February 5, 2008 at 7:08 pm)
I’d say take the results with a grain of salt – they’re not likely to remain consistent as you vary connection speed and cpu speed.
John Resig (February 5, 2008 at 7:42 pm)
@Kevin H: I really like the “weighting” idea – that’s a good thing to consider for a future test. Although the “cached, gzipped, minified” and “loading jquery from cache” tests both already sidestep that aspect (no network is involved).
@Joao: I don’t see what that has to do with anything – jQuery doesn’t depend on any form of JSON code – all JSON deserialization is built into the library itself.
@Schrep: Yep, these are run on all sorts of environments – the full data isn’t available, so it isn’t clear what the distribution per browser is (whether some instances of the same browser are abnormally slow, or abnormally fast).
@Scott: That’s not completely true – users take the path of least resistance – and both Dojo and Yahoo UI provide a single packaged JS file, just like jQuery and Prototype. Additionally, both Prototype and jQuery provide individual files, like Yahoo UI and Dojo – in fact, all of these frameworks now provide a single-file download by default, and all provide individual files. In summary: while it used to be true, it’s not so much anymore.
@Michal Till: These tests were all done with YUI Compressor – I mentioned it in the blog post.
Joao Pedrosa (February 5, 2008 at 8:03 pm)
John,
I guess it’s about the serialization then? As in your article:
* http://ejohn.org/blog/the-state-of-json/
When using Prototype, for instance, I don’t need to load the external JSON.js library, as Prototype seems to do the (serialization?) job itself by default.
You are right, though, that jQuery doesn’t depend on this external JSON.js library to work. But it also doesn’t do the job of this external JSON.js library, right? So it’s up to the user to load it when necessary, or something? Even so, I think I would need it quite often if I were to use jQuery, so it would mean two files loaded separately, unless they were concatenated into one file beforehand to optimize things a bit.
My point was just that this external JSON.js file was not counted in your benchmarks for jQuery, while the functionality in it can come in handy often and is included in other similar libraries by default.
Am I still off base?
John Resig (February 5, 2008 at 10:33 pm)
@Joao: Sure, that makes sense for you and your application – however, nothing that I’ve seen indicates that this is the case for most jQuery users. Submitting serialized JSON data via an Ajax request is, overwhelmingly, an exception. By far, the most common method is to serialize a data structure into a query string – which we support natively. If there were a significant demand for JSON serialization in jQuery applications then we would certainly include it, but that demand just does not exist. For example, I don’t think there are any major jQuery-using sites that also include json.js for serialization.
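As a quick illustration of the query-string case (not from the original post – the endpoint here is hypothetical):

```javascript
// jQuery serializes a plain data structure into a query string natively,
// so the common Ajax case needs no external json.js.
var data = { name: "John", library: "jquery", version: "1.2.1" };

jQuery.ajax({
  url: "/save",                  // hypothetical endpoint
  type: "POST",
  data: data,                    // serialized internally via jQuery.param()
  success: function(response) {
    alert( "Saved: " + response );
  }
});

// Or serialize it directly:
alert( jQuery.param(data) );     // "name=John&library=jquery&version=1.2.1"
```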
bugrain (February 6, 2008 at 7:39 am)
@John
I’d assumed for Protoculous they were using http://protoculous.wikeo.be/ – a combined/compressed (in several ways) Prototype/script.aculo.us (old versions). I could be wrong, it’s been known ;)
So “Protoculous is a common abbreviation for Prototype + Scriptaculous.” is not strictly true in this case.
BTW, as an aside, I’m not keen on the text highlighting (no ‘inverse’) on this blog
Michael Jackson (February 6, 2008 at 10:42 am)
@John: I appreciate your in-depth analysis of this and so many other topics related to JavaScript. I’ve always wondered if perhaps library authors tend to place a bit too much emphasis on file size and not enough on the big picture of total loading time.
Scott Blum (February 6, 2008 at 12:04 pm)
I’m obviously biased, but I feel comfortable saying that these numbers validate the GWT approach of monolithic compilation. We can see that even under the best conditions, using just one external library will cost you a minimum of 100ms, before you even load any application code. And three-quarters of a second in the uncached case!
Contrast this with our Mail sample:
http://gwt.google.com/samples/Mail/Mail.html
I’m just eyeballing it, but for me the entire app generally starts in under a second even on a hard refresh (Ctrl-F5 in Firefox), and around a quarter second when cached. I’m not just talking script parsing, but time to interactivity (which is really what matters), including application code execution, DOM construction and rendering. Try it for yourself.
By the way: this is actually slower than it ought to be, because we’re mashing up Google Analytics. Compiling Analytics in directly would make it even faster.
Dean Edwards (February 6, 2008 at 5:38 pm)
@Scott, I think that you are wrong. The key to modern web scripting is feature detection. You can’t always do that on the server (e.g. Google Gears). There is room for abstraction on the server but replacing one black box with another is not a solution for me.
Kean Tan (February 6, 2008 at 8:04 pm)
@Dean Edwards, in reply to Da Scritch: “MSIE can cache packed code. It’s still normal JavaScript.”
Da Scritch might have meant that the evaled code is not cached, and that the packed code has to be evaled again on each page load. I haven’t tested that statement yet, but is that the case?
Scott Blum (February 6, 2008 at 8:08 pm)
@Dean: Just to clear up any misconceptions, GWT is not a server-side technology. Our browser and capability detection happens entirely on the client. A small bootstrap script sniffs the client side environment up front, one time, then uses that information to fetch a precompiled script that is precisely optimized for that particular environment.
Contrast this approach with a traditional JS library, where browser and feature detection typically happens over and over again at every “leaf” method.
Concrete example: a user without Gears installed runs a Gears-enabled GWT app. Not only does that user not download any of the Gears-related code, they also avoid having to run code that looks like:
if (framework.isGearsPresent()) {
  doGearsThing();
} else {
  doNonGearsThing();
}
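By contrast, the bootstrap side of things looks roughly like this – a hand-written sketch of the idea, not GWT’s actual selector script, and the permutation file names are made up:

```javascript
// Sniff the environment once, then load the single precompiled permutation
// that matches it (illustrative only).
(function() {
  var ua = navigator.userAgent.toLowerCase();
  var permutation = "app.gecko";
  if ( ua.indexOf("opera") != -1 ) {
    permutation = "app.opera";
  } else if ( ua.indexOf("msie") != -1 ) {
    permutation = "app.ie";
  } else if ( ua.indexOf("webkit") != -1 ) {
    permutation = "app.safari";
  }
  if ( window.google && window.google.gears ) {
    permutation += ".gears";   // only users who have Gears download the Gears code
  }
  var script = document.createElement("script");
  script.src = permutation + ".cache.js";
  document.getElementsByTagName("head")[0].appendChild(script);
})();
```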
TM (February 7, 2008 at 2:30 am)
Think of what GWT does as a really, really sophisticated Ant build for a JS library: it packs several different versions of your JS library together, based on every permutation of features (Gears/no Gears, Safari vs Opera vs FF vs IE, Chinese vs English, etc.), and selects the appropriate one at startup. That’s a gross simplification, but it’s the closest analogy. What’s done manually by people trying to shrink JS frameworks is done automatically: every piece of dead code is excised in each permutation, packed down and obfuscated thanks to whole-program analysis, with tons of optimizations that just aren’t possible in a pure JS packer.
You get to use all your favorite abstractions and patterns, but you don’t pay for what you don’t use, and the result is smaller and faster than hand coded JS for any non-trivially sized app.
Badcop666 (February 7, 2008 at 5:38 pm)
I suggest everyone take a step back from the latest ‘wicked’ UI you are working on and ask yourself whether it is really delivering an ‘enhanced user experience’.
While this is definitely a useful and fascinating discussion and area of research (for those faced with significant page volumes and/or critical reliability requirements), the main point is still that a workable model for client-side scripted interaction is largely missing. Efficiency behind the scenes will be appreciated *once the sites themselves have any chance of working properly and, most importantly, thoughtfully*.
Around 90% of interactive websites I encounter announce themselves with a javascript error first (being a developer, my eye is somehow drawn to this), followed by critical functionality which doesn’t work in the browser I’m using – if at all – or works clunkily, slowly, unclearly. So that’s still the major problem in my opinion; sorry if this is a little off-topic.
What we’ve found is that gzip gives the best bang for the buck when you consider the value of having the same code live as in dev. Minifying, while easily automated, breaks that extremely useful connection and hence doesn’t compete with gzip once this downside is considered.
Most gzip installations (IIS, Apache) will allow caching and automatic re-compression when files are modified. As with most problems, we suspect Internet Explorer 6 isn’t entirely happy with gzip in some situations, but we couldn’t confirm this 100%.
Where should you place external script tags? Are they modifying or accessing the DOM? Where should script tags go to allow the CSS, images and markup to render as quickly as possible? We’ve tried head and footer placements, and we use onload to defer all execution until the DOM has correctly rendered (once again, IE6 is the most unforgiving). A bare-bones version of that deferral pattern is sketched below.
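A sketch only – the element id is hypothetical, and this isn’t our production code:

```javascript
// Queue up work and run it on window.onload, so markup and CSS render first.
var deferredQueue = [];
function defer(fn) {
  deferredQueue.push(fn);
}

window.onload = function() {
  for ( var i = 0; i < deferredQueue.length; i++ ) {
    deferredQueue[i]();
  }
};

// Usage, anywhere in the page:
defer(function() {
  document.getElementById("nav").className = "enhanced";
});
```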
These questions add, unfortunately, additional dimensions to the test suite being discussed here. And importantly, the answers may not apply to all situations.
Techniques for fetching and rendering or parsing external js and html/css are maturing alongside browser reliability (ahem!) – which means that payloads can be heavily reduced to give faster page rendering and interaction – critical.
In a js development environment, working with many js files, we also looked at back-end aggregation of two or more files – focusing on the break-even point compared to the overhead of each http connection. In these cases an http profiler such as Fiddler, or one of the many tools for Firefox, is invaluable. We see the http connection, followed by the browser’s js engine parsing the code, then moving on to the next external file, and so on. Images are more efficient, with browsers supporting a reasonable number of simultaneous fetches – incidentally, the use of CSS sprites (put your nav images and icons into one file) can have a huge benefit.
Generally I suggest anyone delivering 100k of javascript in a page view think seriously about the costs and benefits. If you show me a well-designed page, rendered as simple markup and css, I’ll ask you what benefit the 100k delivers to the user. ‘I can, but should I?’ is often the last question asked when new techniques become available. Ajax is NOT the be-all and end-all of web UI technologies – careless use can negate its benefits. Once again – and this point disappears from time to time from the list of top requirements – put the user first. It happened with flash and is now happening again with Ajax and js-heavy sites.
Sometimes we get a long way down the development road and find ourselves up against a seemingly intractable browser problem – admittedly less and less these days. However, the key skill is spotting these situations well beforehand and weighing up the costs and benefits. Personally, nothing is as boring as multi-version/flavour or browser-detecting js development simply for cross-browser support. This is probably a contentious point, and may risk insulting some people, but if you are elevating your own technical gratification by showcasing the latest cool technique, then you need to have a rethink.
Thanks for the stats, and very useful discussion.
Badcop666 (February 8, 2008 at 3:15 pm)
John, please drop me a line re’ my post. Thx.
John Szostek (May 4, 2008 at 7:02 am)
I don’t understand why people don’t consider using a tool like JavaScript Obfuscator (with its compression mode enabled, rather than obfuscation mode), which can minify local variables AND compress PUBLIC API method names too. The output is smaller than what Packer produces, but it does not require any decoding at runtime (unlike the output Packer produces).
CTAPbIu_MABP (May 5, 2008 at 3:47 am)
Hey here is another way of packing
http://blog.nihilogic.dk/2008/05/compression-using-canvas-and-png.html
Bob (May 19, 2008 at 1:41 pm)
This is a very interesting discussion. Does anyone have any recommendations on how to measure the performance (or lack thereof) of 3rd-party JavaScript libraries (download and evaluation) which a site may include? This would be very useful for determining how well such library vendors are meeting SLA metrics.
Rob (June 13, 2008 at 9:16 pm)
I am interested in the possibility of keeping JS libraries on clients once they have been used once, in order to avoid communication delays for large JS libraries. Java technology allows you to do something of this sort with applets, since you can force them to be kept in the browser’s cache. Java Web Start also allows users to install Java applications on the hard drive. And, somehow, plugins and browser extensions play a similar role. Do you know if there is anything similar for JavaScript libraries?
Thanks for any help