Solid Test Driven Development


The motivation behind introducing automated testing on the client side is usually one of two things: inspiration or desperation. Suppose you are in a team thinking about the architecture of a new software solution. The task of going “from zero to hero” is often motivating, as you don’t have to deal with mistakes made in the past. The inspiring thought that the team is going to do everything better, faster and more efficiently than ever before usually results in adopting a testing framework and executing automated tests. As we transition from greenfield to brownfield development, the cause shifts more and more towards desperation. When mistakes made by the software often have to be corrected at the database level, and companies start losing significant amounts of money due to a careless development process, a structured approach incorporating automated tests becomes inevitable. The good news is that automated testing, when done right, can work miracles. The bad news is that in the hands of inexperienced teams, this tool is good for nothing but self-deception. We will discover below why the necessary condition for avoiding self-deception is to write tests without the prejudice of the implementation. One way to achieve this is Test Driven Development (TDD).

Types of automated tests

In order to easily understand the rest of this article, I will introduce the three types of automated tests that most of my blog posts will focus on. I could come up with formal definitions, but you would not be likely to remember them right away. I liked the approach of the book Growing Object-Oriented Software, Guided by Tests: the authors illustrated the role of the different test types with questions. These are:

  • Acceptance tests: does the whole system work?
  • Integration tests: does our code work against code we can’t change?
  • Unit tests: do our objects do the right thing? Are they convenient to work with?

Unit tests are fully technical. They focus on one specific object at a time, simplifying the object's surroundings with hard-coded behavior. Integration tests happen on multiple levels: you can test how your objects cooperate in solving a problem, and you can also test how different systems communicate. Acceptance testing goes end to end: it exercises the whole system, from user interactions to the database. The higher the level, the less technical the tests become. While most users would never understand your unit tests, the acceptance criteria should be clear to everyone, including users, customer service and even business people.
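
To make the distinction concrete, here is a minimal sketch of a unit test written in Mocha, the framework mentioned later in this article, together with Node's built-in assert module. The Cart object and its API are hypothetical, invented purely for illustration.

    // A focused unit test: one object, one behavior, hard-coded data.
    const assert = require('assert');

    // A hypothetical object under test.
    class Cart {
      constructor() { this.items = []; }
      add(name, price) { this.items.push({ name, price }); }
      get total() { return this.items.reduce((sum, item) => sum + item.price, 0); }
    }

    describe('Cart', () => {
      it('calculates the total of the added items', () => {
        const cart = new Cart();
        cart.add('book', 20);
        cart.add('pen', 5);
        assert.strictEqual(cart.total, 25);
      });
    });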

Note that there are other types of tests that we won’t focus on. Some examples are performance and stress tests, checking how your system behaves under expected and peak load, and user tests, checking how users actually use the system. Although user testing requires human interaction, it can still be done in a semi-automated way.

Why do we need automated tests?

Let’s continue with the reason why automated tests are important. I recall an anecdote from my software engineering professor: back in the “good old days”, a military reader had about 1000 pages of test documentation that had to be executed before every deployment. Needless to say, the costs of executing these tests were huge. Once the government ordered the department to cut the testing budget in half. The response of the leader of the test team was: which half of the test documentation should I throw away, the first half or the second half? As funny as it sounds, there are companies out there even today that maintain long documentation of manual scenarios to be executed by hand before giving the green light to deploy software. The amount of time and resources wasted on manually checking obvious things is comparable to the cost of introducing unit, integration and acceptance testing and moving towards continuous integration. Manual testing also requires a lot of discipline. Imagine that a developer addresses a hotfix ticket in a matter of 5 minutes. The fix seems to be working perfectly. The developer may even convince the acceptance testers that it is needless to execute the long procedure of manual tests, as it is just a small local fix. The fix goes live. All of a sudden, something goes terribly wrong on another view of the application that uses the same object that was modified. If this has never happened in your team, can you at least recall hearing about similar situations?

Having an in-house QA team is often the norm, but it is not mandatory. Some companies do really well by shifting responsibilities to the developers. In these companies, a developer is not done as soon as the coding is finished: the solution is ready when it is stable enough to be deployed. However, when developers fail, a well-established software development process should catch their mistakes before deployment, otherwise firing the QA team is just another way to burn money.

When manual tests take a lot of time and discipline to execute, automated tests come to the rescue. You may not even need automated tests; after all, the size of your project matters. A sales page, a tax calculator, a flight booking application, your banking software and NASA all have different quality standards. The tax calculator I used broke when it was updated with the new tax laws for 2015. Who cares? I either use another one or wait a week or two. My expectations were completely different when using a well-known online marketplace, where I happened to purchase the same bundle of items twice because I retried the purchase after an error page popped up saying the feature was not working. Most of the time, when money is involved, things become more serious.

How does Test Driven Development help you?

Test Driven Development can be summarized in three words forming a loop: red, green, clean. First you write a failing test (red). Then you write just enough code to make this failing test pass (green). Then you go back to the code base and check whether your code or your tests need to be refactored (clean). This is how you achieve clean code in a stable, controlled way. When you make a mistake while refactoring, the safety net of your existing tests will often catch you: after all, these tests describe your full knowledge of what your code should already be capable of. Repeat this red-green-clean process until the specification is fully described by your passing tests, and therefore your code fully acts according to the specification.
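
As an illustration, here is a hypothetical walk through one red-green-clean iteration with Mocha; in practice the test and the production code would live in separate files, but the idea is the same.

    // Red: start with a failing test. At this point the add function
    // does not exist yet, so the test below fails first.
    const assert = require('assert');

    describe('add', () => {
      it('returns the sum of two numbers', () => {
        assert.strictEqual(add(2, 3), 5);
      });
    });

    // Green: write just enough code to make the failing test pass.
    function add(a, b) {
      return a + b;
    }

    // Clean: revisit both the code and the tests and refactor where needed,
    // relying on the already passing tests as a safety net.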

When you first implement your feature and only then add some tests to cover it, you are deceiving yourself. Imagine there is a serious error in your thought process. This error can be an edge case, and it can be hard to discover. Once you are done with the implementation, you are certain that your code is working, and writing tests becomes just a formality. If you ever end up writing your tests after implementation, your prejudices about your code and your understanding of the problem are carried with you. This can result in partial blindness. This is why testing your own code is rarely a good idea: you are too involved in it to see it from a distance.

TDD is one option for getting rid of your prejudices, but it is not the only one. Your team can also split responsibilities in two: writing the code and writing the tests based on the specification. These can be done in parallel. The key point is that tests should be written based on the specification. The benefit of the TDD approach is that while writing your tests, you actually think about the interfaces, the internal state space and the responsibilities of your objects. All in all, writing tests first helps you write better code, while writing code first makes you write worse tests.

Common problems and solutions

I have heard or read frightening opinions such as:

  • “We have 5 failing tests, that’s normal!”
  • “We don’t test before coding because we don’t know where to start”
  • “Yes, we do TDD because we write tests”
  • “We prefer BDD to TDD”.
    “What do you mean? You write tests after implementation.”
    “Yes, but we use the BDD syntax of Mocha.”

Let’s address these points one by one.

First of all, if you start writing automated tests, never accept a failing test. All testing frameworks should provide you with the ability to put incomplete tests in a pending state; a test that fails is incomplete. Tests that fail intermittently are also disturbing: if you have a thousand tests and fifty of them each fail with a 1% probability, you will see at least one failing test in roughly 4 runs out of 10 (1 - 0.99^50 ≈ 0.4). False positives and false negatives will happen, but they should be about as rare as winning the lottery.
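
In Mocha, for instance, marking a test as pending instead of letting it fail is trivial: a test registered without a callback shows up as pending, and an existing test can be temporarily disabled with it.skip. The scenario below is hypothetical.

    describe('money transfer', () => {
      // Pending: no callback, so the report lists it as pending, not failing.
      it('rejects transfers above the upper limit');

      // Skipped: a temporarily disabled test.
      it.skip('notifies the recipient about the transfer', () => {
        // to be restored once the notification service is stable
      });
    });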

Test driven development is not scary if you know where to start. At this stage, it is enough to note that the point of view of the acceptance criteria is a good starting point: your acceptance criteria become acceptance tests. If an acceptance test is too complex, break it down into smaller tests. Once all your lower level tests pass, your complex test will eventually pass too. If you have trouble formulating acceptance criteria, I suggest studying Behavior Driven Development and researching Specification by Example. Even if you decide not to follow the principles of Behavior Driven Development in the strict sense, I assure you that you will at least learn how to formulate solid acceptance criteria.
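
One way to keep acceptance criteria and acceptance tests in sync is to let the nested test descriptions spell out the criteria themselves. The banking scenario below is hypothetical; the pending tests double as an executable to-do list.

    describe('Bank transfer', () => {
      describe('given an account with a balance of $500', () => {
        describe('when the user transfers $100 to another account', () => {
          it('decreases the balance of the source account to $400');
          it('increases the balance of the target account by $100');
        });
      });
    });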

The last two statements will be addressed together. TDD does not mean that you write automated tests using an automated testing framework, and it does not have much to do with the syntax you use either. Development is test driven when you first write a failing test, then write just enough code to make it pass, then refactor your code, and potentially your tests as well, to increase their quality. As soon as you have fully described the feature you want to implement, you are done. If any of the above steps are omitted or their order is changed, we are not talking about TDD anymore.

Write solid tests

Hopefully it is clear by now that writing good tests and sticking to the principles of TDD come with different benefits. If you take refactoring seriously during the TDD process, you will end up with better tests, but the readability, maintainability and focus of your actual unit tests form a skill that you have to master outside the scope of TDD. Similarly, TDD is not the ultimate answer that solves your software design problems; it just helps you point out the shortcomings of your code. For instance, when you see that a test is hard to write, it may indicate that your code is not testable enough, signalling the need for a refactor.

The best advice I can give you when formulating your tests is to shift to a mindset where you are writing your tests for non-technical stakeholders. In fact, this advice becomes mandatory when you cooperate with less technical team members such as your project manager, your UX team or a client. There is no better indication of progress than your already passing acceptance tests compared to the pending tests you still have to complete. Even future users should be able to read your test documentation and know what the tests are about. If your tests are hierarchical, not all levels of the system can be made fully understandable, but do your best at least with your acceptance tests. Integration and unit tests may become more technical, as the former will likely talk about interfaces and communication protocols, while unit tests will eventually contain at least some elements of your detailed design, such as models and views in the case of an MVC application.
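
As a small, hypothetical illustration of this mindset, compare two phrasings of the same test: the first leaks transport details, while the second reads like the specification.

    describe('transfers', () => {
      // Technical wording that only developers understand:
      it('calls POST /transfers and expects HTTP 201');

      // Stakeholder-friendly wording describing observable behavior:
      it('lets the user transfer money between two of their own accounts');
    });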

The above advice can be supported by three good practices that should be followed:

  • Focus on exactly one problem at a time. A single test should not exercise multiple features. Focus gives you clarity through concentration of power: put all your effort into describing the one condition that brings you forward towards your goals. As a side benefit, if one of your focused tests fails some time in the future, you will know exactly where the problem lies. Compare this to trying to determine all the possible causes why a test describing 3-4 conditions fails.
  • Too much setup, tear down (cleaning up after a test) or too much exception handling in the tests can shift your focus away from the real problem. If you end up doing too much of these activities compared to exercising the features themselves, either your code is not testable enough, or you could simplify the code under test by using stubs or mocks to replace the peers of your objects under test. Mocking is outside the scope of this article.
  • Avoid magic numbers at all costs. Even when setting up test data, know exactly why and how you specify your data. Fully exercise the state space by covering either each state or each transition that can occur. When dealing with numeric ranges, make sure you exercise the boundary values. For instance, if the lower limit of your bank transfer is $20 and the upper limit is $1000, make sure you test transfers of $19.99, $20, $1000 and $1000.01 (see the sketch after this list).
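
Here is a minimal sketch of the boundary-value tests for the transfer limits above; the isValidTransferAmount validator is hypothetical and is shown inline to keep the example self-contained.

    const assert = require('assert');

    // A hypothetical validator for the $20-$1000 transfer range.
    function isValidTransferAmount(amount) {
      return amount >= 20 && amount <= 1000;
    }

    describe('transfer amount validation', () => {
      it('rejects $19.99, just below the lower limit', () => {
        assert.strictEqual(isValidTransferAmount(19.99), false);
      });
      it('accepts the lower limit of $20', () => {
        assert.strictEqual(isValidTransferAmount(20), true);
      });
      it('accepts the upper limit of $1000', () => {
        assert.strictEqual(isValidTransferAmount(1000), true);
      });
      it('rejects $1000.01, just above the upper limit', () => {
        assert.strictEqual(isValidTransferAmount(1000.01), false);
      });
    });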

Once I ended up reviewing an automated test suite and found the following: a function generated JSON data, substituting the role of the API. The JSON generator had exactly one argument: the ID of the object. Then I started reading the generated attributes and found that status was id % 4 and isDeletable was id % 10 === 3. The main problem is that in order to generate test data that suits your needs, you have to reverse engineer the whole function to be able to enter the values you actually want to test. As a side note, in this specific example, it was impossible to generate a deletable business object with a status of 0 or 2, as no even number ends with 3.
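
An approximate reconstruction of that generator is shown below; everything except the status and isDeletable formulas is hypothetical. It demonstrates why such derived test data is a trap.

    // Test data derived from a single id: to get the data you need,
    // you first have to reverse engineer these formulas.
    function generateBusinessObject(id) {
      return {
        id,
        status: id % 4,              // 0..3, tied to the id
        isDeletable: id % 10 === 3   // only ids ending in 3 are deletable
      };
    }

    // A deletable object needs an id ending in 3, which is always odd,
    // so its status is always 1 or 3: a deletable object with status
    // 0 or 2 can never be generated.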

Summary

Both writing automated tests and applying the principles of Test Driven Development can be a gift when applied properly. You will eventually get rid of the burden of having to manually test functionality that is easy enough to automate. As a side benefit, developers will be encouraged to proceed with their detailed design, especially when refactoring their code and tests. We also saw that the red-green-clean loop frees the development team from the burden of testing under the prejudice of their thought process while implementing the tested feature. Four excuses and wrong beliefs about automated testing and test driven development were addressed; if you have more, feel free to get in touch with me. I have often heard that TDD is not good for a particular team for one reason or another. In the end, TDD proved to be not only beneficial to them, but even mandatory as their team size grew. As a balancing act, I also gave you some guidelines on how to write solid tests: your tests should be related to the specification, hiding implementation details at least on the top level. In addition, you will reach the next level in test quality if your tests exercise exactly one feature each, you use mocks and testable code to limit setup, teardown and exception handling, and the values you specify in your test cases make perfect sense even to your client.
