A common – and desirable – technique for building JavaScript-based web applications is progressive enhancement: giving capable browsers only the features they are able to make use of, while providing less capable browsers with an adequate, albeit degraded, experience.
This provides the best of both worlds: users of modern browsers (the majority audience) get the best experience, while those using less capable browsers (such as most mobile devices) still get an interface that suits them well.
There's one overriding catch in all of this, however: progressive enhancement is almost always applied only to the JavaScript functionality of web applications. Presumably the assumption is that if a browser is capable of supporting the desired JavaScript features of an application then it must also be capable of supporting the specific CSS styling as well.
One technique that has greatly interested me as of late is one employed by the Filament Group (a local design shop here in Boston): Progressive CSS Enhancement. The premise is that progressive enhancement is done primarily with page styling in mind, rather than from a purely JavaScript perspective.
This is particularly important for a couple reasons:
- It should be easy to degrade page styling in a manner that isn’t reliant upon CSS browser hacks – this technique makes it so.
- Not all pages utilize heavy JavaScript (and, thus, progressive JavaScript enhancement does not apply to them).
Their technique works as follows: you choose to provide the user with either the enhanced or the degraded experience by default. In either case a basic script is run which attempts to verify a handful of CSS styling behaviors along with some basic JavaScript functionality (just enough to be able to run the test).
A couple of the CSS techniques that they test for:
- Box model: make sure the width and padding of a div add up properly using offsetWidth.
- Positioning: position a div and check its positioning using offsetTop and offsetLeft.
- Float: float 2 divs next to each other and evaluate their offsetTop values for equality.
- Clear: test to make sure a list item will clear beneath a preceding floated list item.
- Overflow: wrap a tall div with a shorter div with overflow set to ‘auto’, and test its offsetHeight.
With those in place you can pretty safely begin designing a useful CSS-based layout. Note that the experience will only ever be upgraded if all of the tests pass – if any fail then it simply won't continue. Obviously some browser discrepancies will still exist (like the differences in the box model between Internet Explorer 6 and most other browsers) but that's usually an acceptable level of hackage (meaning that you won't have to deviate much from what you're already doing).
The actual implementation is quite simple. It consists of a number of JavaScript-based rules that test for behavior. For example, the following rule tests for a working box model:

var newDiv = document.createElement('div');
document.body.appendChild(newDiv);
newDiv.style.visibility = 'hidden';
newDiv.style.width = '20px';
newDiv.style.padding = '10px';

// With a standards-compliant box model, 20px of width plus 10px of
// padding on each side should give an offsetWidth of 40px.
var divWidth = newDiv.offsetWidth;

if (divWidth != 40) {
    document.body.removeChild(newDiv);
    return false;
}
That check alone is able to knock out a number of older browsers that aren't able to successfully implement that CSS behavior. Currently all of the rules are in one large code block, which makes maintenance unwieldy. I think that this library could definitely benefit from extensibility (being able to add or remove the rules that you wish to honor).
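As a rough sketch of that kind of extensibility (hypothetical code on my part, not the structure of the Filament Group's current script), the rules could live in a simple registry that you add to or prune, with the float check from the list above as an example rule:

// Hypothetical rule registry: each entry is a named function that
// returns true if the browser behaves as expected.
var cssRules = {
    floatSideBySide: function (body) {
        var wrap = document.createElement('div');
        wrap.style.visibility = 'hidden';
        wrap.style.width = '100px';
        wrap.innerHTML = '<div style="float:left;width:40px;">a</div>' +
                         '<div style="float:left;width:40px;">b</div>';
        body.appendChild(wrap);
        // Two successfully floated divs should sit on the same line.
        var passed = wrap.childNodes[0].offsetTop === wrap.childNodes[1].offsetTop;
        body.removeChild(wrap);
        return passed;
    }
    // ...additional rules (box model, positioning, clear, overflow) would go here.
};

// Run every registered rule; only upgrade the experience if all of them pass.
function passesAllRules() {
    for (var name in cssRules) {
        if (!cssRules[name](document.body)) { return false; }
    }
    return true;
}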
When it comes time to actually use this technique within your application there are a number of strategies available. However, for the sake of discussion here, let's assume that you're sending the degraded experience to the client by default (optionally upgrading if the browser is capable). You would then be able to employ these two techniques:
- A class of "enhanced" is assigned to the body element, to be used for optional CSS scoping (such as body.enhanced {background: red;}).
- Any links to alternate stylesheets that have a class of "enhanced" will be enabled.
In this manner you can specify all of your stylesheets in your document head, with some disabled (as alternate stylesheets) or with some CSS rules applied only when the body.enhanced selector matches.
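For illustration, here's a minimal sketch of what that upgrade step might look like once every test has passed (my own hypothetical code, not the Filament Group's): it adds the "enhanced" class to the body and switches on any alternate stylesheets marked with that class.

// Hypothetical upgrade step, run only after all capability tests pass.
function applyEnhancements() {
    // Scope optional CSS rules: body.enhanced { ... } now matches.
    document.body.className += ' enhanced';

    // Enable any alternate stylesheets marked with class="enhanced".
    var links = document.getElementsByTagName('link');
    for (var i = 0; i < links.length; i++) {
        if (/\benhanced\b/.test(links[i].className) &&
            links[i].rel.indexOf('alternate') !== -1) {
            links[i].rel = 'stylesheet';   // promote from alternate to active
            links[i].disabled = false;     // make sure it is actually applied
        }
    }
}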
Their implementation also allows you to only execute JavaScript if all the rules pass – however, I'm not sure that's an acceptable solution in this situation. If you want to verify that your desired JavaScript functionality is able to operate then you should check for just that. In this case, though, we get the other side of the equation: verifying that CSS works as you would expect it to, knowing that an adequate experience can be provided.
If you’re curious as to which devices are supported by the default rules in the test file you can view the result matrix on the tool’s site.
I definitely think that this technique has a lot of merit, especially in the realm of mobile-accessible web sites. Since it's virtually impossible to design and test your pages to work on such a large number of obscure platforms, this degradation strategy is one that will benefit both you and your users in the long run.
Lalit Patel (March 15, 2008 at 5:06 am)
Hi John,
I second your thoughts that this technique has a lot of merit and that its best application would be for websites that are mobile-accessible.
On a side note:
Just wondering how many things a web developer/designer has to keep in mind while designing and building a site. I strongly believe that it hampers creativity. We have all these better cross-platform, high-level languages and libraries coming up to allow the programmer to focus on the logic rather than the language syntax or micromanaging threads and sockets; when will this day come for web development? Maybe a Java-like write once, run everywhere language which can look/execute the same across all browsers/devices/OSes.
I sometimes feel that this is a side effect of the democratic nature of the web. At the same time, I feel this is what makes Web Development more challenging these days ;) Getting things to work everywhere for everyone!
Thanks to your jQuery, at least it solves the JS part of the problem :)
Cheers!
Aditya Mukherjee (March 15, 2008 at 6:14 am)
My single greatest gripe about methods like this is that it puts the burden of the work on the developer, and not the browser. It ‘does’ matter if the browser doesn’t support standardized CSS selectors and properties. It does matter if it cannot render ‘px’ and ’em’ the same as other browsers. All these are part of ‘standards’, which by definition means common to everyone else. I’ve taken a harsh (but necessary) stand against it, and will continue doing so. We have to get browsers (and yes, I look towards IE when I say this) to listen to developers, not the other way around.
As the desktop browsers spawn their mobile siblings, I’m sure these hack-y methods will not be required since the engine will be the same — means less trouble for browser makers AND web-designers/developers alike. Smartphone browsers that is. Java based WAP browsers on other phones don’t deserve the time anyway ;)
Thanks for the tip, but I wish you’d push the other side.
Larry Roth (March 15, 2008 at 8:08 am)
Nice post John. I like the idea of creating a degraded interface to common feature sets of many different types of devices rather than trying to create unique interfaces for each device or be stuck peppering your code with lots of hacks. I like this approach better than the way I see some Web sites going, which is to provide lots of different interfaces (e.g. standards compliant, old browsers, ADA compliant, WAP, iPhone, etc.). First, it's an awful lot of maintenance for both developers and designers, and second, it needn't be so complicated. Can't wait for that day when everything is standards compliant!
Jeroen Coumans (March 15, 2008 at 8:19 am)
Rather than progressive enhancement I'd like to see the principles of graceful degradation applied to CSS, which is just the other way around: provide the richest, most advanced experience possible, and degrade gracefully for less capable user agents.
timothy (March 15, 2008 at 10:54 am)
That’s a nice goal in theory, but it’s much harder for visual designers and QA. Planning for and testing just two outcomes (rich or not) is much easier, and is more likely to end up with something that works. It also keeps developers focused on making sure those two outcomes are both good ones.
So I guess I'm an advocate for "you get the good stuff or else you don't," as long as both outcomes are built to be the best they can be.
If someone takes the piecemeal path (depending on your browser, you get some of the good stuff), they really need to QA every possible combination of switches, because you never know what you might get on some future weird cell phone.
I don’t think someone with IE5.5 expects to get the latest, best stuff. How could they? It’s really the funky little devices with non-standard browsers that stump us most often.
The good thing about a test like this catching on is that it gives the mini-browsers a goal to shoot for…
“Shit, all these websites are dropping down because we don’t pass this one test. These sites are cooler on all the competing phones.”
“Well we’d better fix our browser then!”
The test could help get all these mobile browsers up to snuff. Again, I’m assuming that the less fancy path is still clean and acceptable. Some web sites show up as blank on some browsers.
Jake Harvey (March 15, 2008 at 10:56 am)
Actually you have it backwards Aditya. By simply splitting it in two this process creates less work for developers. Instead of using hacks to try and make all the browsers play nice you tell the less compliant, “here’s the content but it’s not gonna be as pretty.”
John Resig (March 15, 2008 at 11:17 am)
@Lalit: I don’t think it hampers it – although it certainly is more challenging. However there will always be an associated cost with attempting to support older browsers. If you simply don’t care about old browsers then, yes, doing anything will be more difficult than doing nothing. However, with this technique in hand, it at least makes it bearable.
@Aditya: I’m not really sure that we have a choice as to where the burden goes – old browsers will never be updated and people will continue to use them (albeit in dwindling numbers). This particular technique just makes the process of targeting them (and mobile phones!) simple and painless – which is how it should be.
@Larry: Absolutely – I think that at a certain point you just have to pick a feasible solution and run with it. Supporting two groups (“Fully-capable browsers” and “Old and incapable browsers”) at least makes the process a bit simpler.
@Jeroen: I didn't mention it but this technique also supports graceful degradation (it supports both techniques – you can build up or tear down, your choice). To quote their site:
@timothy: I think it’s definitely the goal that the site should show up well (using clean HTML markup, simple CSS, and reasonable images). Doing this will make targeting mobile browsers quite easy.
Scott Jehl (March 15, 2008 at 11:22 am)
John, thanks for a nice review of our technique.
I think it’s important to point out that it’s not so much about what is being tested, but that there is a test taking place at all. It’s the idea behind testing for capabilities before using them that is the important takeaway here.
John has brought up an interesting point that it may be a bit odd to test mainly for CSS capabilities before providing both CSS and JS enhancements. I agree that the test can AND should be extended to test a whole lot more (with performance in mind of course). As it stands now, we found that it creates a favorable divide between those who get low and hi-fidelity experiences (with CSS and JS), but it can and should evolve.
In response to the last few comments:
I think the distinction between enhancing and degrading is a great conversation to have. Of course it has been discussed many times in the recent past and our team at Filament feels strongly that the concept of enhancing an already functional experience is the only sure way to provide a usable application to any device that comes our way.
Regardless of your opinion on the topic, let’s keep positive about it while we discuss the merits of each approach.
@Aditya and Jeroen: While I agree that it would be easier for developers to simply code to the best case scenario, it often doesn’t play out well in degradation to older devices. While it may be true that adding a CSS3 text drop shadow will be harmless to older devices that simply don’t understand it; it’s a whole different scenario when devices get an entire layout wrong by improper box model handling, floating, clearing, etc. What we’re MORE concerned about here is not whether it’s pretty, but rather whether it is usable at all.
Let me reiterate though, it’s great that this is being discussed. I’m certainly not saying it is an easier path to code from the bottom up, but reaching the broadest audience possible with our applications is a goal we certainly all can appreciate.
Tyler Karaszewski (March 15, 2008 at 11:59 am)
The problem with this test is that it’s not a CSS test. It’s still a javascript test. You could have a fully CSS3 compliant mobile device (or any other device for that matter), that doesn’t include a javascript interpreter (possibly for performance or security reasons).
And so, your tests would all fail, not because the device doesn't support the CSS box model properly, but because "offsetWidth" isn't implemented in its javascript interpreter.
If it were guaranteed that all these old devices had a reasonably standard javascript implementation on them, then this test would be great, but I'd imagine that's far from the actual case.
Turning off CSS because of poor javascript support doesn’t seem to make much sense to me.
Dinoboff (March 15, 2008 at 12:18 pm)
It can also disable the stylesheet when the test fails, which means that a modern browser with JS off can have the enhanced design, but it also means that an old browser with JS off or with limited JS capability would try to render the enhanced design as well.
If JS were required for having the enhanced design, the JS file would need to be on a widely trusted server so that people with NoScript don't have to allow JS for the website just to get the enhanced design.
Sander Aarts (March 15, 2008 at 12:20 pm)
I have to agree with Tyler for general use.
But when it comes to dynamic changes to the layout that are triggered by JavaScript, only testing support for the used JavaScript methods is not good enough.
@John: thanks for posting about this.
John Resig (March 15, 2008 at 12:21 pm)
@Tyler and Sander: I think that’s an acceptable loss. In my case, at least, I think I would be happy to consider a browser that has JavaScript disabled (or unimplemented) to be a “fringe/unsupported” browser. Because, really, unless you’re actively targeting that demographic it probably should receive that degraded experience. That’s not to say that the degraded experience should be poor, or unusable, just that it won’t, explicitly, be as easy to use as the full experience (maybe they have to use a drop-down instead of a slider – or a list instead of a menu).
In reality the test is two things: a test for the implementation of specific aspects of CSS styling and a test for the browser's ability to interpret that styling via JavaScript. Failing either one of those makes it perfectly safe to push a browser into the unsupported realm.
Of course if there was a browser that you did, actively, want to support that didn’t have any JavaScript capabilities (whatever that may be) then, obviously, you would have to target that specially. However I definitely suspect that your “full experience” won’t be applicable on a device of this quality.
@Dinoboff: You missed the 'progressive enhancement' aspect of this implementation. You (as the developer) have the option of providing the user with the degraded experience first and upgrading if they support the full version (which is what I recommended in the blog post), or you can provide them with the full experience and then downgrade. Both techniques are implemented; it's simply up to you to choose which is acceptable. Having the low-to-high upgrade path means that if a browser doesn't support JavaScript it will get the degraded experience (which is good), whereas having the high-to-low downgrade path means that there's less of a performance cost for the majority of users.
Dinoboff (March 15, 2008 at 12:26 pm)
Have you noticed that it checks whether jQuery is used? From http://72.47.209.59/examples/testUserDevice/testUserDevice_UNPACKED.js:
function enhancedDomReady(func){
    function bodyReady(){
        if(document.body){
            clearInterval(checkBody);
            if(testUserDevice()){
                enhanceDocument();
                //forward functions to domReady event with available library
                if(jQuery){$(function(){func();});}
                else indepDomReady(function(){func();});
            }
        }
    }
    var checkBody = setInterval(bodyReady, 10);
}
However, it doesn't really get used. That made me think that most JavaScript work needs a cross-browser domReady function, and maybe one to get elements by class name, one to get/set DOM attributes, and one to add/remove events. If that's all that is needed, you wouldn't write it as a plug-in for a particular library. Writing these functions yourself will be lighter and won't force your users to use one specific library.
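As a rough illustration of that idea (hypothetical code, not part of the test script), such a standalone domReady helper might look something like this:

// Minimal, library-independent domReady helper (illustrative only).
function domReady(fn) {
    if (document.readyState === 'complete') {
        fn();  // the document has already finished loading
    } else if (document.addEventListener) {
        document.addEventListener('DOMContentLoaded', fn, false);
    } else if (window.attachEvent) {
        window.attachEvent('onload', fn);  // older IE fallback
    }
}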
However, it's kind of a waste to rewrite these functions when your work will be used on pages that might use a library anyway; it might even break if used with some libraries.
Couldn't Dojo, jQuery, MooTools, Prototype and YUI agree on an interface for these basic needs and implement it, so that we can write library-agnostic JavaScript without having to reinvent the wheel?
Dinoboff (March 15, 2008 at 12:35 pm)
I agree that low-to-high is better, but having FF2-with-NoScript users get the same experience as IE5 users is not that great. However, if the tester JS file were hosted on a server that most NoScript users allow, that would be fine.
Scott Jehl (March 15, 2008 at 12:36 pm)
@Tyler: You've made a good point. An application's user base is going to affect the way you develop, and if your particular audience is known to have uncommon restrictions such as those you mentioned, you may take a different approach to reaching your user base (such as not using javascript at all!).
However, most of the Ajax apps we build nowadays utilize a tight synchronization of both CSS and Javascript. Therefore, if a user has javascript turned off for some reason, we’d much prefer they get the still usable low-fi version of our app. An example of this would be custom form widgets that require good JS support to behave properly, and good CSS support to be presented properly.
Here’s why we think this test is a good approach:
1. We develop our applications – first and foremost – to be functional and usable even if you don't have CSS and Javascript. This is the first use case. It's most important that we make our application's resources accessible to the broadest audience possible.
2. Then, if your device proves capable, we layer on enhancements for a richer experience that utilizes those same resources (read: one code base). This makes for a much more optimized experience for those deemed capable of handling it.
We’ve found that this is a great way to ensure that our application is usable to all who visit; whether they’re using a really old mobile device, the latest beta drop of FF3, or even the latter with some features disabled.
Scott Jehl (March 15, 2008 at 12:43 pm)
@ Dinoboff: Good catch. We have this in there for a couple reasons. First of all, we use jQuery for most of our work at Filament Group, so we’ve utilized its domReady function if it’s available. Second, jQuery’s domReady function is sure to be more up-to-date than our included standalone function. For that reason, we default to it if it’s available.
I agree though that a common way to forward along to domReady in all major libraries would be a great improvement. If nothing else, we could provide several conditionals in there to see which library is attached, and if it’s there, use it.
John Resig (March 15, 2008 at 12:44 pm)
@Dinoboff: I’m not completely sure what your concern is, here. It seems like you are having an issue with the fact that this library provides a basic DOM Ready implementation – and that you wish this could be standardized across libraries so that this wouldn’t have to be the case.
I think it's a little bit short-sighted to try and paint all JavaScript libraries with a unifying brush when, in fact, the differences between them are often quite complex. In the case of DOM Ready, for example, jQuery not only waits for the DOM to be ready but also waits for all CSS styles to be applied. No other library does this. How do you decide which features to standardize upon? For example, if all the other libraries agree that there should be a DOM Ready function – but it shouldn't have CSS-waiting support, then jQuery wouldn't use it. The entire situation is quite tricky because unless it's an all-or-nothing proposition, we're back to where we began.
The closest solution I think you’ll find is the Open Ajax Alliance in which they’re attempting to standardize some portions of key functionality.
@Dinoboff: I would only want Firefox 2, with NoScript, to get the full experience if I was actively testing against it. That’s not the case, in my experience, and I suspect that it’s similar for most other developers as well.
timothy (March 15, 2008 at 1:02 pm)
@Scott,
That’s a great point. My app does extremely heavy financial statistics in JavaScript. It’s tightly integrated with HTML and CSS that I display the results with (and I’m creating the HTML and CSS in JavaScript as I go). I simply can’t afford to have the amount of work I can distribute to a JS client done on my server–I’d go broke in a week and my users would be hung up while my server did minutes of intensive calculations per user.
I do still have to give my diminished users something. I have to give reasonable responses with a reasonable display, but it’s obviously going to be less expensive calculations and less interactive displays.
In my case it’s very easy to say, this is a cutting edge rich internet app that you can get, say, on browsers that were released in about the last 5 years, or as an Adobe AIR app (and maybe iPhone and Android). _Otherwise_ I can give you _something_, but you and I both know you’re not getting everything you could. I’ll make both experiences the best they can be, but there’s really no useful half-way experience I can even spec out.
In the FAQ, I’ll explain the many ways to get the full experience, but I don’t want to flood the user with insults for using the “wrong” browser.
I suspect more and more apps will be like mine in the coming years–apps that replace a professional desktop app, and absolutely rely on JavaScript to do calculations, not just page enhancement of the display of things calculated on the server.
I can imagine an online Photoshop paint program arriving in 5 years, programmed in JavaScript or ActionScript (or IronRuby in Silverlight, or Script#, which is C# translated to JavaScript). I can’t imagine the point of one where you use a form to tell the server what pixel to turn what color.
So, yeah, in my case it makes complete sense to go boolean–rich internet app or substitute. I buy that the more common web page currently is _not_ a real app, but more likely a web page that can have various improvements applied to it piecemeal.
Scott Jehl (March 15, 2008 at 5:02 pm)
@Timothy: Good points, thanks. It'd be worth adding that the test itself doesn't decide where the line is drawn between experiences; that part is left to the developer. For example, you may have some simple javascript or CSS enhancements that you are comfortable delivering to all users, pass or fail. There are multitudes of ways the test can be used to provide a number of enhancements. The test just allows us to safely develop advanced features in – as Chris Heilmann said – a defensive way.
@everyone: Thanks for the thoughtful comments so far. If you have ideas for how the test can be improved, please send us your thoughts.
Diego Perini (March 16, 2008 at 1:11 pm)
A nice way of profiling browsers' capabilities, but too costly for the information returned. Even if the browser says the final box is 40 pixels, we have no proof of this being real; this is the kind of inconsistency and bug that makes our wonderful layouts break.
I for myself try to use pixel sizes only as a last resort anyway, so percentages or “em” may still be broken even if I used that trick.
In the table I found on “filamento”‘s site I just applied a:
cat capabilities.txt | cut -f2,3,4 | sort
I believe the usable result from that collection of browsers/versions may be expressed in jQuery as follows (Netscape & Konqueror are missing from jQuery.browser, but anyway, to make a point):
if (
(jQuery.browser.msie && jQuery.version
The only remaining exception is Mozilla 1.7, which still fails on Mac.
I understand that with the parameters passed as arguments something better could be done, but remember that capabilities are often needed in combination: parts will be in the DOM and parts will be in CSS/HTML, and if any of these parts is missing it will normally render the others obsolete too.
I have used CSS to have test-points too, but wouldn't rely on CSS "pixels" to conditionally enable my widgets.
timothy (March 16, 2008 at 3:03 pm)
>>I believe the usable result from that collection of browsers/versions may be expressed in jQuery as follows (Netscape & Konqueror are missing from jQuery.browser, but anyway, to make a point):
Yeah, but the jQuery browser and version check only helps us out now. It doesn’t help us figure out if future cell phones and weird little Linux boxes have a browser that does what we expect with CSS.
I guess we just have to play the odds and future-proof the best we can. We still might have to end up rewriting stuff, of course. Someone can always find a way to break us.
I do like the idea of a somewhat standard test to encourage the browsers to become more like each other. I guess a really weak acid test is what I’m thinking of. It’s too much to hope that lame little cell phones pass real acid tests for the foreseeable future. But someday they will. Without the tests, there’s no clear target forcing them to converge.