Test-Driven Development (TDD) is a software development process we use in all of our projects. Its focus is on testability. The basic idea is that the developer first writes a test for the functionality they want to add, and only then implements it. This way, only “useful” and tested code is written. It also avoids one of the pitfalls of classical V-model software development processes, where it’s easy to end up with untestable code because the test strategy is only taken into account much later.

The process can be described as follows:

  1. Add a test.
  2. Run the test and make sure it fails. If it doesn’t fail, it isn’t testing anything, since the functionality it’s meant to test hasn’t been implemented yet.
  3. Write the simplest code that will make the test pass.
  4. Run all the tests and verify that they all pass. If not, go back to step 3 and repeat until they do.
  5. Refactor the code: this is a fundamental step. When all tests pass, code can be tidied up, optimised or rearranged to make it easier to understand, with the confidence that the changes have not altered the functionality of the module. After refactoring, run all the tests again. Code should only be refactored when the tests pass.
  6. Repeat the process for each new feature.

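As a minimal sketch of this cycle in Python (the `celsius_to_fahrenheit` function and the use of the standard `unittest` module are purely illustrative, not taken from an actual project), the test is written first and fails while the function does not exist; the simplest implementation is then added and the suite is re-run:

```python
import unittest


# Step 1: write the test first. Running it at this point fails (step 2),
# because celsius_to_fahrenheit has not been implemented yet.
class TestTemperatureConversion(unittest.TestCase):
    def test_freezing_point(self):
        self.assertEqual(celsius_to_fahrenheit(0), 32)

    def test_boiling_point(self):
        self.assertEqual(celsius_to_fahrenheit(100), 212)


# Step 3: the simplest code that makes the tests pass.
def celsius_to_fahrenheit(celsius):
    return celsius * 9 / 5 + 32


# Step 4: run all the tests again and check that they pass.
if __name__ == "__main__":
    unittest.main()
```

From here, the implementation can be refactored (step 5) with the tests acting as a safety net, before moving on to the next feature (step 6).
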
The tests are fully automated, which ensures repeatability.
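
For example, assuming the standard `unittest` test discovery layout (an assumption about project structure, not a requirement), a single command re-runs the whole suite and can be invoked from a build script or a continuous integration job:

```
python -m unittest discover -v
```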

There are additional benefits to using this technique. First, tests can be used as documentation for the code base: they provide developers with usage examples. They increase confidence when making a change to existing source code: running the tests is very quick, and the developer will know in a matter of seconds whether the changes they’ve just made have broken other areas of the code. They also make it possible to detect bugs early: the initial overhead of writing a test first will prove insignificant compared to the time it takes to hunt for a bug later on.

They also act as executable (and up-to-date) specifications.

Modules are tested in isolation from each other. To achieve this isolation, mocks are used to replace the entities the code under test interacts with. This helps the developer implement a clean interface, and is especially useful for low-level code: hardware can be mocked, which allows, for instance, easy and automated fault injection. This makes it possible to test low-level error handling automatically, which would otherwise be very difficult.
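
As an illustrative sketch (the sensor driver, register address and error type below are hypothetical, and Python’s `unittest.mock` stands in for whichever mocking framework a project actually uses), the hardware bus is replaced by a mock that injects a fault, so the error-handling path can be tested automatically:

```python
import unittest
from unittest.mock import Mock


# Hypothetical code under test: a driver that reads a sensor over a bus object.
class SensorDriver:
    def __init__(self, bus):
        self.bus = bus

    def read_temperature(self):
        try:
            raw = self.bus.read_register(0x10)  # hypothetical register address
        except IOError:
            return None  # the low-level error handling we want to exercise
        return raw * 0.5


class TestSensorDriverFaultInjection(unittest.TestCase):
    def test_bus_error_is_handled(self):
        # The real hardware bus is replaced by a mock configured to fail.
        bus = Mock()
        bus.read_register.side_effect = IOError("bus timeout")

        driver = SensorDriver(bus)
        self.assertIsNone(driver.read_temperature())
        bus.read_register.assert_called_once_with(0x10)

    def test_successful_read(self):
        bus = Mock()
        bus.read_register.return_value = 50  # raw value from the mocked bus
        self.assertEqual(SensorDriver(bus).read_temperature(), 25.0)


if __name__ == "__main__":
    unittest.main()
```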

We use code coverage tools as part of our software quality metrics. As a rule of thumb, we aim to have 80% of the code exercised by unit tests.
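
For example, with coverage.py (one common choice; the commands and threshold flag below are given purely as an illustration of the approach), the suite can be run under the coverage tool and the check failed when the target is not met:

```
coverage run -m unittest discover
coverage report --fail-under=80
```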