This article briefly describes a bit of the theory behind testing standalone front-end projects, the issues you are likely to meet, and the solution I came up with. Here’s the shortcut https://github.com/inossidabile/grunt-contrib-testem if you are bored already ;).
Part 1. Introduction
If you know what Jasmine, Mocha, PhantomJS and Grunt are, skip to Part 2.
Let’s say we have a testing framework. Then we can manually create an HTML file, include the JS we want to test, open it in a browser and, well, test. It’s already a kind of automated testing, but still far away from something reasonable. And the first thing to think about is Continuous Integration: such tests can only be run manually, with the results checked by eye. No “on-commit” runs, no Travis integration. Sadness.
PhantomJS is the thing that solves that. It’s a headless, invisible browser that you can control programmatically. Like this, for example: https://github.com/ariya/phantomjs/blob/master/examples/colorwheel.js. Phantom will play the role of our eyes and hands: it will open a page, check the results and pass them back to a “script runner”, one that can run on commit or at Travis.
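If you have never seen it, a minimal Phantom script looks something like this (the URL and the CSS selector below are made up for the example; they depend on your test framework and markup):

```js
// check.js: run with `phantomjs check.js`
var page = require('webpage').create();

page.open('http://localhost:8000/tests.html', function (status) {
  if (status !== 'success') {
    console.log('Could not load the page');
    phantom.exit(1);
    return;
  }

  // Our "eyes": read the results the test framework left on the page
  var failed = page.evaluate(function () {
    return document.querySelectorAll('.test-failed').length;
  });

  console.log(failed + ' test(s) failed');
  phantom.exit(failed === 0 ? 0 : 1); // a proper exit code for the script runner
});
```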
And the “script runner”, in its turn, is Grunt. Have you used Grunt before? Go and try it if you have not: it’s incredible. Grunt comes with a sample project showing its main features. And guess what? It has a testing section! The sample project uses QUnit. Here we see:
- hand-made HTML testing playground
- single test file
- external plug-in grunt-contrib-qunit…
- …and its configuration
Unlike this sample, most real projects do not keep the HTML playground manually crafted. They use other (more powerful) Grunt plug-ins that generate it on the fly. Combining all that, we get a Grunt task (sketched right after this list) that:
- generates page with test
- runs Phantom
- grabs the result
- prints it to console
- exits with a proper code (depending on whether the tests succeeded)
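A minimal Gruntfile for the QUnit flavor of that setup could look like this (the paths are assumptions):

```js
// Gruntfile.js: a sketch of the setup described above
module.exports = function (grunt) {
  grunt.initConfig({
    qunit: {
      all: ['test/**/*.html'] // the HTML testing playgrounds
    }
  });

  grunt.loadNpmTasks('grunt-contrib-qunit'); // runs the pages in Phantom

  grunt.registerTask('test', ['qunit']);
};
```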
Part 2. Basic stuff and issues
Have you already automated tests that run in both the console and browsers? If you already know the pains and just want a cure, pass on to Part 3.
It’s pretty easy to organize tests as described in Part 1. It’s a common way to solve the issue, and there are ready-made Grunt plug-ins for practically any framework you might use: grunt-contrib-qunit and grunt-contrib-jasmine, to name just two.
Alright! Isn’t that simple? Install the plug-in, drop a couple of lines into the config, add a proper runner to .travis.yml. That’s it. Flawless victory. Victory? Doh…
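The .travis.yml itself stays tiny, something along these lines (the Node version here is an assumption):

```yaml
language: node_js
node_js:
  - "0.10"
before_script:
  - npm install -g grunt-cli  # the `grunt` binary for the build machine
script: grunt test
```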
Single runs work now. But that’s just the start. To keep the development process away from the “switch a window” game, runners are supposed to watch the test files for modifications and restart the tests automatically. Here comes grunt-contrib-watch! Right? Another entry in the config and we are done: Grunt keeps the process running and reruns the test runner from scratch every time you save a file.
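That entry could be as simple as this (the file paths are assumptions):

```js
// Inside grunt.initConfig({ ... }):
watch: {
  tests: {
    files: ['src/**/*.js', 'test/**/*.js'],
    tasks: ['qunit'] // rerun the whole suite on every save
  }
}
```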
In simple cases (if we forgive watch its bugginess) it does the job indeed. Let’s just keep this in mind for now: we have a development mode and it utilizes watch internally.
Libraries happen to be big. Sometimes even huge. But even mid-sized libraries typically use more than one file. And as you know, front-end JS can’t bundle itself; something has to help it (and that “something” probably runs from Grunt as well).
It means that before we actually test our code, we have to bundle it. No problem, you say: Grunt can chain tasks, so we just make it run the bundling task before the tests. But now, remember that we have a development mode with watch? It means we have to bundle the code every single time we press Save in the editor or a file changes. How long does it take to bundle your code once? ;)
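In Gruntfile terms it boils down to chaining (the task names are assumptions; the bundler here stands in as grunt-contrib-concat, but it could be anything):

```js
// Single runs now bundle first...
grunt.registerTask('test', ['concat', 'qunit']);
```

…and the tasks list of the watch entry above becomes `['concat', 'qunit']` too: a full rebuild plus a full test run on every save.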
With such an approach watch really starts to drive you crazy. It misses saves, it crashes, and every time you look at the console with the test results you have literally no idea WHAT you are looking at. Are those the results of the latest run? Or is it bundling right now, with a new run yet to come? Did it even catch the last save? In the end you find yourself switching back to manual runs.
But even if you don’t, we are still not there yet. If your application is written in CoffeeScript or another dialect, you probably use the same language for the specs, so you have to compile them too. Now you have to compile both your app and EVERY test file you have, on EACH test run. Need I say there can be MUCH more test files than application files? So how long did you say it takes to bundle your code?
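The config grows accordingly, something along these lines (the paths are assumptions):

```js
// Two more targets inside grunt.initConfig({ ... }):
coffee: {
  app:   { expand: true, cwd: 'src',  src: ['**/*.coffee'], dest: 'lib',      ext: '.js' },
  specs: { expand: true, cwd: 'spec', src: ['**/*.coffee'], dest: 'tmp/spec', ext: '.js' }
}
```

…and the watch tasks list turns into `['coffee', 'concat', 'qunit']`. Everything, every time.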
Wait. Can’t we recompile only the files that actually changed? Not really: watch simply can’t do that, and unfortunately none of the existing workarounds help with the modern version. The only thing that works (if you can call that working) is full recompilation on each change.
Running with something other than Phantom
In real life nobody is going to use your code in PhantomJS, so from time to time you have to check it with real browsers anyway. To do that, we have to manually open the HTML generated by the runner in the browser we target. And if you are unlucky enough to deal with things that behave differently in different browsers, you are right back at the start: the “switch a window” game.
It’s surely not 100% of cases. Not even 50% of them. That is what you might be thinking; at least that’s what I thought before I experienced it for the first time. And the circle has closed.
Part 3. Testem, Mincer and the way they integrate
Testem is simply awesome. Really. It’s so incredible I can’t even describe what I felt when I first tried it. Just go watch it in action.
Testem completely removes the difference between headless console runs and real browsers. Everything just gets bundled into one big ecosystem with a single large green “CHECK” button. And I was happy, until I tried to use it with a real project…
The marketing lies! Well… a little bit at least. Testem says it supports preprocessing. No, it does not. I mean, it does in some way: it allows you to run a custom bash command before each test run and after it, and states that this way it’s possible to do anything from the command line. Well… technically it is. It’s also technically possible to cross an ocean riding a dolphin.
But I didn’t give up! Despite this limitation, Testem still has a lot of things worth supporting. At the very least we are going to solve the problem of manual browser checking; this alone is a huge step forward. Yet another disappointment, though: Testem absolutely is (was!) not adapted for external programmatic integration. It’s all kinda selfish and independent. So I did this:
I wrote a Grunt task that was running Testem that was running a bash script that was running Grunt that was compiling Coffee.
We need to go deeper!.. This approach turned out to work even worse than before. So I took the scalpel and forked Testem.
Testem is perfect when it comes to:
- Support for different testing frameworks
- Headless runs using ready-made JS files
- Watching a set of files to rerun tests automatically
- Integration with the real browsers of your OS
Within a couple of days Toby and I agreed on and introduced all the required API modifications. The new version of Testem can (see the sketch after this list):
- Accept configuration from API calls; a config file is not required anymore
- Accept hooks as JS functions (instead of bash strings that run X that runs Y that runs…)
- Pass data to hooks, e.g. fire an on_change event when Testem notices a file modification
- Include JS from URLs, not just paths
- Override the forced process destruction at the end of tests
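Put together, driving the patched Testem from JS looks roughly like this. It is only a sketch: the method and option names follow the list above but are assumptions rather than exact signatures, and watchedPaths, urls and recompile() are hypothetical placeholders:

```js
var Testem = require('testem');

var testem = new Testem();
testem.startDev({
  framework: 'jasmine',
  src_files: watchedPaths,          // files Testem watches for modifications
  serve_files: urls,                // JS included from URLs, not just paths
  on_change: function (filePath) {  // a JS function instead of a bash string
    recompile(filePath);            // the hook receives the changed file's path
  }
});
```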
Okay then. Now, the storage. In the 1.2 branch of Joosy we adapted Mincer to manage internal dependencies, and that’s exactly the storage we need here in fact. It suits us just perfectly. Here is the resulting workflow (sketched in code right after the list):
- Start connect.js on port X serving the Mincer middleware
- Take a list of paths, including paths to Coffee and CoCo files (anything Mincer can handle), expand the UNIX masks and build the resulting list of files Testem should watch for modifications
- Map the list of files to a list of URLs: http://localhost:X/path
- Pass the watch and serve lists to Testem and run it
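The serving side of that workflow could look like this (the port and paths are assumptions):

```js
var connect = require('connect');
var Mincer  = require('mincer');

var environment = new Mincer.Environment();
environment.appendPath('src');   // anything Mincer can handle: Coffee, CoCo, ...
environment.appendPath('spec');

var server = new Mincer.Server(environment);

// Start connect on port X and serve the Mincer middleware
connect()
  .use(function (req, res) { server.handle(req, res); })
  .listen(4000);

// A watched file like spec/foo_spec.coffee is then served to Testem
// as http://localhost:4000/foo_spec.js, compiled and cached on demand
```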
As a result, Testem watches the original files for modifications but doesn’t include them directly. Instead it includes them through Mincer, which listens on the neighboring port. And Mincer, in its turn, handles all the compilation and caching.
I have to say here that Mincer isn’t just fast. It’s incredibly smart when it comes to caching: you can rest assured that at any moment you get up-to-date code for any file. But what’s really important is that none of this involves Testem anymore. Even if it takes a while to compile all your code (which happens on the first run), it’s the browser that waits. That keeps the Testem watcher relaxed and working well. It also means that whenever you open the console you can be sure you see the latest results; you’ll just see zero progress if it’s compiling right now.
All this stuff is wrapped into a Grunt plug-in. All you have to do to start using it is install https://github.com/inossidabile/grunt-contrib-testem and list the files you want to test in the config, along these lines (the paths below are examples; see the README for the exact options):
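```js
// Gruntfile.js
module.exports = function (grunt) {
  grunt.initConfig({
    testem: {
      main: {
        src: [
          'lib/**/*.coffee',  // application code...
          'spec/**/*.coffee'  // ...and specs; Mincer compiles both on the fly
        ]
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-testem');
};
```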
And this time it finally works well. In my case it made me run out of front-end testing issues completely. In fact, I even started to enjoy the process. What about you?
Please send kudos to the incredible authors of Testem and Mincer: Toby Ho, Vitaly Puzrin and Alex Zapparov. Not only did they create something valuable, they also keep maintaining it really well. I had a chance to interact closely with both projects; they really deserve it :).