We’re starting to undertake a few new initiatives here at Mozilla that attempt to find ways to benefit web developers – and, by extension, JavaScript libraries. I think this is an excellent movement, so I’m doing everything that I can to support it and push it forward. With that in mind, here’s an introduction to one of the first initiatives that we’re undertaking.
JavaScript libraries can be fickle beasts. Generally speaking, they attempt to pave over browser bugs and API differences, providing a consistent base layer that users can build upon. This is a challenging task, as bugs can frequently be nonsensical – and can even result in browser crashes.
There are a number of techniques that can be used to know about, and work around, bugs or missing features – but generally speaking, object detection is the safest way to determine whether a specific feature is available and usable. Unfortunately, in real-world JavaScript development, object detection can only get you so far. For example, there’s no object that you can ‘detect’ to determine if browsers return inaccurate attribute values from getAttribute, if they execute inline script tags on DOM injection, or if they fail to return correct results from a getElementsByTagName query.
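To make this concrete, here’s a rough sketch of how such a behavior has to be probed – by actually exercising the API and inspecting the result. The helper name is mine, and the href quirk shown is a simplified version of the real older-IE behavior:

```javascript
// Hypothetical feature test: does getAttribute("href") return the
// literal attribute value, or a browser-resolved URL? (Older IE
// returned the resolved form.) No object to "detect" here -- we
// have to run the feature and look at what comes back.
function returnsLiteralHref(doc) {
    var a = doc.createElement("a");
    a.setAttribute("href", "#");
    // A conforming engine hands back exactly what was set.
    return a.getAttribute("href") === "#";
}
```

The point being that the test documents the expected behavior, so it keeps giving the right answer even after the browser fixes the bug.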
Additionally, object detection can fail completely. Safari currently has a super-nasty bug related to object detection. For example, assume that you have a variable and need to determine whether it contains a single DOM Element or a DOM NodeList. One would think that it would be as simple as:
if ( elem.nodeName ) {
    // it's an element
} else {
    // it's a nodelist
}
However, in the current version of Safari, this causes the browser to completely crash, for reasons unknown. (However, I’m fairly certain that this has already been fixed in the nightlies.)
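In the meantime, one workaround sketch (not code from any particular library) is to duck-type on the shape that a NodeList exposes, rather than touching a property like nodeName at all:

```javascript
// Sketch of a safer check: NodeLists expose a numeric length and an
// item() method, while a single element exposes nodeType/nodeName.
// This never reads elem.nodeName on an object that might be a
// NodeList, sidestepping the crash described above.
function isNodeList(obj) {
    return typeof obj.length === "number" &&
           typeof obj.item === "function";
}
```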
I was in the group of JavaScript developers who provided feature/bug fix recommendations to Microsoft for their next version of IE. A huge issue that we were faced with was that we were knowingly asking Microsoft to both break their browser and alienate their existing userbase, in the name of standards.
For example, if Microsoft adds proper DOM Events (addEventListener, etc.) – should they then remove their IE-specific event model (attachEvent, etc.)? Assuming that they do decide to remove the deprecated interfaces, this will have serious effects upon JavaScript developers and libraries (although, in the case of the DOM Event model, object detection is a viable solution and is, therefore, completely future-compatible.)
Additionally, in Internet Explorer, doing object detection checks can, sometimes, cause actual function executions to occur. For example:
if ( elem.getAttribute ) {
    // will die in Internet Explorer
}
That line causes problems because Internet Explorer attempts to execute the getAttribute function with no arguments (which is invalid). (The obvious solution is to use typeof elem.getAttribute == 'undefined' instead.)
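Spelled out as code (a sketch, not any library’s actual source), the safe form inspects the type of the member instead of evaluating it for truthiness:

```javascript
// typeof never invokes the member, so this is safe even on host
// objects that misbehave when coerced to a boolean or called
// without arguments.
function hasGetAttribute(elem) {
    return typeof elem.getAttribute !== "undefined";
}
```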
The point of these examples isn’t to rag on Safari or Internet Explorer in particular, but to point out that rendering-engine checks can end up becoming very convoluted – and thus more vulnerable to future changes within a browser. This is a very important point. A browser deciding to fix bugs can cause more problems for a JavaScript developer than simply adding new features. Every bugfix has huge ramifications, because developers expect interfaces to work and behave in very specific ways.
The recent Internet Explorer 7 release can be seen as a case study in this. They fixed numerous CSS-rendering errors in their engine, which caused an untold number of web sites to render incorrectly. By fixing bugs, shockwaves were sent throughout the entire web development industry.
All of this is just a long-winded way of saying: Browsers will introduce bugs. Either these bugs are going to be legitimate mistakes or unavoidable bug fixes – either way, they’ll be regressions that JavaScript developers will have to deal with.
At Mozilla, we’ve looked at this issue and Mike Shaver came up with an excellent solution: Simply include the test suites of popular JavaScript libraries inside the Mozilla code base.
Doing this will provide at least two huge benefits:
- Library developers will be able to know about unavoidable regressions and adjust their code before the release even occurs.
- Mozilla developers will have a massively-expanded test suite that will help to catch any unintended bugs. In addition to ensuring that fewer general bugs are introduced into the system, library authors and users can be content knowing that their code already works in the next version of Firefox, without having to do any extra work.
What progress has already been made? MochiKit’s test suite (Mochitest) is already a part of Mozilla’s official test suite (it’s used to test UI-specific features). I’ve already touched base with Alex Russell, of Dojo, and I’ll be working to integrate their test suite once Dojo 0.9 hits. Perhaps unsurprisingly, I’ll be working to integrate jQuery’s test suite into the core, too. Additionally, I’m starting to contact other popular library developers in an attempt to get at least a static copy of their test suites in place.
Note: This initiative isn’t limited to straight JavaScript libraries. If you have a large, testable, JavaScript-heavy, Open Source project let me know and I’ll be sure to start moving things forward. For example, some form of testing for Zimbra will probably come into play.
In all, I think this is a fantastic step forward – and one that really shows the immediate benefits of having an open development process centered around browser implementations. I hope to see other browser manufacturers catch on too, as having universally-available pre-release library testing will benefit more users than we can count.
Tobie Langel (March 1, 2007 at 3:34 am)
That’s an excellent and exciting initiative.
Let me know if there’s anything you need to include the Prototype test suite. I’d be happy to look into it.
John Resig (March 1, 2007 at 10:31 am)
@Tobie: Absolutely. I know that you guys use rake to generate the test suite, so I suspect that just a final, static, version of the suite would be a perfect candidate for inclusion. I’m fairly certain that this is feasible, but if you could just fact-check me, that’d help!
Dean Edwards (March 1, 2007 at 12:52 pm)
This is an interesting topic and one that I have been thinking about a lot. If you write a JavaScript library that fixes today’s problems will it still work in a year? Two years? Ten? From a library’s perspective you need a combination of browser sniffing and object detection plus a vague knowledge of what might be implemented in the future. Not easy.
Tobie Langel (March 1, 2007 at 1:57 pm)
John,
All of Prototype’s test suite, except for the Ajax stuff, could be merged into a static file.
The Ajax testing needs things like HTTP headers which AFAIK requires a server. We currently use WeBrick, but the tests could be ported to use something else if needed.
The Ajax tests can still be run as static files: tests which depend on HTTP headers are simply skipped.
I suspect most libraries do have similar testing needs for Ajax, so maybe deciding on a “standard” protocol for those would be a good thing.
John Resig (March 1, 2007 at 2:35 pm)
@Dean – I suspect that anything that doesn’t use pure object detection would be subject to failing in the future (if left unmaintained). Of course, simply due to the sheer number of workarounds that many browsers require, I can’t imagine most libraries being able to withstand a DOM-compliant IE 8 (assuming that it has its own set of bugs and misgivings – and that libraries weren’t able to adapt).
@Tobie – Yeah, that sounds perfect. For now we can just focus on the low-hanging fruit and work our way up to more-complex situations as we go. I know that with jQuery we require a web server to test our Ajax code (as a lot of it requires interaction with dummy PHP scripts, and such, to test client output) – so I suspect that the need will be similar, elsewhere, too.
Mike Shaver (March 2, 2007 at 1:03 am)
The existing test suite uses an HTTP server written in JS to provide controlled server-side responses, so I suspect we can adapt it here. (I’m personally less concerned about verifying those server-interaction cases, from our perspective, but we could get there in time.)
Matt Kruse (March 2, 2007 at 12:00 pm)
Being future-proof is one of the reasons I cringe when I see browser detection code, even in a library like jQuery. You may know that IE7 has quirk X that needs to be corrected right now, but will it still exist after IE7 SP1 and the January 2008 patch? If all you’re checking for is ie7 and assuming behavior, you aren’t very future-proof.
Object Detection is the simplest cross-browser test, but if that fails to expose a quirk or bug in a specific browser, shouldn’t you resort to Feature Correction before Browser Sniffing? That is, actually use the feature being tested (if it exists) and then examine the results to see if they are as you expect. If not, then perform the correction. This way, if a future browser update fixes the problem, your feature correction will not be needed at that point. Feature Correction usually involves more code than simply resorting to browser sniffing, but in the end it’s more robust, IMO.
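That pattern can be sketched as follows. The comment-node quirk is a real older-IE behavior – getElementsByTagName("*") could include comment nodes – but the helper names here are illustrative:

```javascript
// Feature correction, sketched: probe the engine once against a known
// fixture, then install the filtering workaround only when the probe
// shows the bug is actually present.
function includesComments(doc) {
    var div = doc.createElement("div");
    div.innerHTML = "<!-- c --><span></span>";
    // A correct engine reports exactly one element for "*".
    return div.getElementsByTagName("*").length > 1;
}

function getElements(container, buggy) {
    var nodes = container.getElementsByTagName("*");
    if (!buggy) return nodes;          // fast path once the bug is fixed
    var out = [];
    for (var i = 0; i < nodes.length; i++) {
        if (nodes[i].nodeType === 1) { // keep element nodes only
            out.push(nodes[i]);
        }
    }
    return out;
}
```

If a future update fixes the quirk, the probe starts returning false and the extra filtering loop is never taken.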
Anyway, the effort you are talking about is fantastic. You’re a great asset to the JavaScript community, and having you work with Mozilla only reinforces the direction and future stability of the jQuery framework.
Tobie Langel (March 2, 2007 at 12:54 pm)
Feature correction is unfortunately not always possible… and it has a major caveat: given the sheer number of bugs in some browsers, it can imply excessively long load times.
And it’s certainly not an option for any Ajax-related issue.
Martin (March 2, 2007 at 11:22 pm)
What is better: limiting bug fixes to the initial version of IE7 and thus possibly breaking things in IE7 SP1, or just checking for IE7 and possibly breaking things in IE7 SP1? You will have to check your code against the SP1 release in both cases.
And here is something where IE is actually better than Firefox. With its conditional compiles, you can determine the browser version without user-agent sniffing. Although you cannot differentiate between the service packs in IE6, unfortunately.
Opera has window.opera.version, which is even better.
I would love to find out if the browser is Firefox 1.x or 2.x. E.g. the latter does not have a visual jump when going from an opacity of less than 1 to 1. This is one case where feature detection and feature correction do not help.
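For reference, the conditional-compilation trick Martin mentions looks roughly like this (a sketch; in any non-JScript engine the special comment is simply ignored and the fallback value is used):

```javascript
// JScript conditional compilation: inside IE, the /*@ ... @*/ block
// is compiled and yields the engine version (e.g. 5.7 corresponds to
// IE7); in every other browser it is an ordinary comment, leaving
// only the fallback value of 0.
var jscriptVersion = /*@cc_on @_jscript_version || @*/ 0;
```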
AlexeyGfi (September 3, 2007 at 1:11 am)
Hello!
Thanks for the topic. As for me, when I write JavaScript I check all of the doubtful places in my code.
For example,
1. Checking for an empty array. If we receive an array from a split call, it is difficult to predict what the array will contain, so I use the following check:
if (arrayToCheck.length == 0 ||
    typeof arrayToCheck[0] == "undefined" ||
    arrayToCheck[0] == "") {
    // the array is effectively empty
}
2. The same applies to objects:
var divObj = document.getElementById("divContainer");
if (!divObj) return;

// list its properties:
for (var prop in divObj) {
    // prop - the property name
    // divObj[prop] - the property value
}