User-generated performance results are a double-edged sword.
There are plenty of examples of browser-related misinformation within the “Browser Comparison” results. I’ll list a number of issues to show how much of a problem user-generated browser performance data can be.
- Safari 2 and Safari 3 are grouped together, which is highly suspect. By a number of measurements Safari 3 is significantly faster than Safari 2, so merging the two does neither version any favors.
- Firefox 3 has only two results. A commenter mentioned that this is because its results are being lumped into the “Netscape 6” category, which is itself a poor place to group them.
- IE 7 is shown as being faster than IE 6. This may be the case; however, it’s far more likely that users running IE 7 are on newer hardware (think: a new computer with Vista installed), meaning that, on average, IE 7 will appear faster than IE 6 regardless of its actual performance.
- Users of Firefox, Opera, and Safari for Windows are generally early adopters and technically savvy, meaning they’re also more likely to have high-performance hardware (giving those browsers an artificial advantage in the results).
- No attempt at platform comparison is made (for example, Safari on Windows vs. Firefox on Windows, and Safari on Mac vs. Firefox on Mac). Lumping the results together provides an inaccurate view of actual browser performance.
There’s one message to take away from this particular case: don’t trust random user-generated browser performance data. Until you control for confounding factors like platform, system load, and even hardware, it’s incredibly hard to get data that is meaningful to most users, or even remotely useful to browser vendors.
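To make the point concrete, here’s a minimal sketch (in Python, with entirely hypothetical field names and numbers, not the actual data from the site in question) of what controlling for just one confounder, platform, might look like: timings are only aggregated within the same OS, and exact browser versions are kept separate rather than merged.

```python
from collections import defaultdict
from statistics import median

# Hypothetical user-submitted results: each entry records the exact
# browser version, the platform it ran on, and a benchmark time in ms.
submissions = [
    {"browser": "Safari 3", "platform": "Mac", "time_ms": 410},
    {"browser": "Safari 2", "platform": "Mac", "time_ms": 980},
    {"browser": "Firefox 2", "platform": "Mac", "time_ms": 620},
    {"browser": "Firefox 2", "platform": "Windows", "time_ms": 540},
    {"browser": "IE 7", "platform": "Windows", "time_ms": 700},
    # ... many more submissions ...
]

def summarize(submissions):
    """Group timings by (platform, exact browser version) so that
    Safari 2 and Safari 3 are never merged, and Mac results are
    never compared directly against Windows results."""
    groups = defaultdict(list)
    for s in submissions:
        groups[(s["platform"], s["browser"])].append(s["time_ms"])

    # Use the median rather than the mean so a few submissions from
    # unusually fast (or slow) machines don't dominate the summary.
    return {key: median(times) for key, times in groups.items()}

for (platform, browser), med in sorted(summarize(submissions).items()):
    print(f"{platform:8} {browser:10} median: {med} ms")
```

Even this only addresses one of the factors above: hardware differences and system load would still need some kind of per-machine normalization (say, dividing by a baseline benchmark run on the same machine), which user-submitted data almost never includes.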