A big push at Khan Academy has been to write more and more exercises for students to practice with. Naturally, to increase the number of exercises that we have, we needed to make it easier for team members and casual committers to write them.
Brand New Framework
When I started, just over two months ago, I immediately set about rewriting the framework to use semantic HTML and to not rely upon any server-side components whatsoever. To illustrate the change, here is one particular exercise, Significant Figures 1, written for the old framework and written for the new framework. You can view the live version of the exercise as well.
Significant Figures 1 went from 191 lines of code down to 58.
Note the dramatic lack of “code”: most logic is encapsulated and reduced to a series of variable declarations (some with minimal logic). Additionally, the utility functions were made significantly more useful and generic, and are now shared across exercises (some of this was done in the old framework, but a complete rewrite allowed us to find areas of obvious overlap).
The result of the new framework is an interesting mix of semantic markup and logic, wrapped in a form of highly specialized templating. Full details regarding the specifics of the framework can be found in the Writing Exercises tutorial.
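To give a rough sense of that mix of markup and logic, here is a minimal sketch of how a templating pass like this might work; the function name and the mechanics are illustrative, not the framework's actual code. The idea is that each `<var>` tag holds an expression that gets evaluated against the exercise's generated variables and replaced with the result:

```javascript
// Illustrative sketch (not the framework's actual implementation):
// substitute each <var>expr</var> in the markup with the value of expr,
// evaluated with the exercise's variables in scope.
function renderTemplate(html, vars) {
  return html.replace(/<var>(.*?)<\/var>/g, (_, expr) => {
    // Build a function whose parameters are the variable names,
    // then call it with the corresponding values.
    const fn = new Function(...Object.keys(vars), "return (" + expr + ");");
    return String(fn(...Object.values(vars)));
  });
}

const vars = { A: 3, B: 4 };
console.log(renderTemplate("<p>What is <var>A</var> + <var>B</var>?</p>", vars));
// → "<p>What is 3 + 4?</p>"
```

An exercise author then only declares variables and writes semantic markup; the framework fills in the rest.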
In short, some of the cool new features of this framework include:
- A full module system for dynamically loading specific utilities (such as graphing).
- A brand new templating framework, including templating inheritance, that makes it easy to reduce repeated code in exercises (some examples).
- A brand new graphing framework (written by Ben Alpert, named ‘Graphie’) that’s built on top of Raphael. This framework provides a simple API for creating all sorts of graphs and charts; it’s used extensively throughout the exercises.
- An extensible plugin system for creating new answer types (to expand beyond basic multiple choice or fill in the blank).
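The last point, extensible answer types, can be sketched as a small plugin registry; the names and API here are hypothetical, not the framework's actual interface, but they show the shape of the idea: each answer type supplies its own validation hook.

```javascript
// Hypothetical sketch of an answer-type plugin registry
// (the real framework's API differs).
const answerTypes = {};

function registerAnswerType(name, handlers) {
  answerTypes[name] = handlers;
}

// A numeric answer type: compare the learner's input to the
// expected value within a small tolerance.
registerAnswerType("number", {
  check: (guess, expected) => Math.abs(parseFloat(guess) - expected) < 1e-9
});

console.log(answerTypes.number.check("42", 42)); // → true
```

New answer types (say, expressions or plotted points) would register themselves the same way, without touching the core framework.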
Of course, that’s saying nothing of the actual user interaction. Most of the new framework’s logic is contained on the client side and is driven directly by Khan Academy’s REST API.
By moving logic to the client side we’re able to offer a much more interactive experience (generating all problems on the client, submitting results in the background via Ajax, and things of that sort).
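Client-side problem generation can be sketched with a seeded pseudo-random generator, so that the same seed always reproduces the same problem; the PRNG and function names below are assumptions for illustration, not the framework's actual code, but reproducibility like this is what makes client-generated problems practical to test and report on.

```javascript
// Illustrative sketch: deterministic client-side problem generation.
// A simple linear congruential generator (not the framework's PRNG).
function makeRandom(seed) {
  let state = seed >>> 0;
  return () => {
    state = (1664525 * state + 1013904223) >>> 0;
    return state / 4294967296; // value in [0, 1)
  };
}

function generateProblem(seed) {
  const rand = makeRandom(seed);
  const a = Math.floor(rand() * 10) + 1;
  const b = Math.floor(rand() * 10) + 1;
  return { question: `${a} + ${b} = ?`, answer: a + b };
}

// The same seed always yields the same problem.
console.log(generateProblem(7).question === generateProblem(7).question); // → true
```

A result submission then only needs to send the seed and the learner's answer in the background, rather than a full page round-trip.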
A big concern for us, with this rewrite, was to ensure that the quality and integrity of exercises would be maintained well into the future. We wanted to make sure that the content of the exercises was correct and that regressions would not occur over time.
In fact, we have three levels of testing:
- We have unit tests for our modules and utility functions (ensuring that things like the templating continue to work correctly).
- We have an interactive interface for generating unit tests for any given exercise (details below).
- We have a “Report a Problem” interface for sending post-deployment issues directly to our bug tracker.
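The first level, unit tests for utility functions, might look something like the following; the utility name is hypothetical (chosen to echo the Significant Figures 1 exercise), though `toPrecision` is the standard JavaScript API such a helper could lean on:

```javascript
// Hypothetical utility: round a number to n significant figures,
// using the standard Number.prototype.toPrecision API.
function toSigFigs(value, digits) {
  return Number(value.toPrecision(digits));
}

// Illustrative unit-test-style assertions for the utility.
console.assert(toSigFigs(123.456, 2) === 120, "2 sig figs of 123.456");
console.assert(toSigFigs(0.004567, 3) === 0.00457, "3 sig figs of 0.004567");
```

Tests at this level guard the shared building blocks; the other two levels handle what automation can't.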
The second stage is particularly interesting. On a fundamental level, it’s hard to unit test something as complex as an exercise. While we can ensure that exercises still work at a functional level, we can’t automatically verify whether the problems make sense, whether graphs look correct, or whether it’s even possible to pick the right answer.
For these reasons we’ve introduced a new testing interface in which we use human developers as unit testers. When in test mode, an additional panel is displayed asking the user to provide additional details regarding whether or not the problem is working correctly. We ask them to really dig into the exercises and make sure that they’re working correctly, that hints are being displayed properly, and other things of that nature.
At the moment we’re working on better reporting for all those unit tests, and on tighter integration with our development process.
I’m rather pleased with how the overall development process worked out. We actively kept both the code open (it’s up on GitHub and it’s MIT licensed) and the process open as well. We’ve had contributions from over twice as many outside contributors as members of the Khan Academy team (although the vast majority of the development was done by members of the team).
We have a detailed Getting Involved guide that explains how exactly anyone can participate in the development of the framework (or in the authoring of new exercises).
Next Steps and Thanks
There are a few things that we’re looking to do in the near future: bringing offline support to the exercises (for those with spotty internet connections, or those using the upcoming iPad/Android application), translating content into other languages on the client side, and checking exercise answers using Node.js and JSDOM.
I also want to take this opportunity to thank everyone at Khan Academy, and all the community contributors, for helping to make this project happen. I especially want to thank Marcia Lee, Ben Alpert, Jeff Ruberg, Igor Terzic, Omar Rizwan, and Ben Kamens for their help in getting this out the door.