Charles' Six Rules of Unit Testing

by Charles Miller on July 25, 2002

  1. Write the test first
  2. Never write a test that succeeds the first time
  3. Start with the null case, or something that doesn't work
  4. Don't be afraid of doing something trivial to make the test work
  5. Loose coupling and testability go hand in hand
  6. Use mock objects

Write the test first

This is the Extreme Programming maxim, and my experience is that it works. First you write the test, and enough application code that the test will compile (but no more!). Then you run the test to prove it fails (see point two, below). Then you write just enough code that the test is successful (see point four, below). Then you write another test.

The benefits of this approach come from the way it makes you approach the code you are writing. Every bit of your code becomes goal-oriented. Why am I writing this line of code? I'm writing it so that this test runs. What do I have to do to make the test run? I have to write this line of code. You are always writing something that pushes your program towards being fully functional.

In addition, writing the test first means that you have to decide how to make your code testable before you start coding it. Because you can't write anything before you've got a test to cover it, you don't write any code that isn't testable.
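
As a minimal sketch of that cycle (using a hypothetical IntStack class invented purely for this illustration):

import junit.framework.*;

public class IntStackTest extends TestCase {
    // Step one: write the test first.
    public void testNewStackIsEmpty() {
        assertTrue("new stack is empty", new IntStack().isEmpty());
    }
}

// Step two: write just enough that the test compiles - and fails.
class IntStack {
    public boolean isEmpty() {
        return false; // run the test, watch the bar go red
    }
}

// Step three: make isEmpty() return true, re-run the test, and
// watch the bar go green. Then write the next test.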

Never write a test that succeeds the first time

After you've written your test, run it immediately. It should fail. The essence of science is falsifiability. Writing a test that works first time proves nothing. It is not the green bar of success that proves your test, it is the process of the red bar turning green. Whenever I write a test that runs correctly the first time, I am suspicious of it. No code works right the first time.
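
For instance (a made-up illustration, not from the original article), a test can go green the first time because the test itself is broken:

public void testUpperCase() {
    String input = "hello";
    input.toUpperCase(); // result discarded by mistake
    assertEquals("hello", input); // compares input to itself: always passes
}

This test would stay green even if toUpperCase() were completely broken. Only by seeing a test fail first do you know it is capable of failing.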

Start with the null case, or something that doesn't work

Where to start is often a stumbling point. When you're thinking of the first test to run on a method, pick something simple and trivial. Is there a circumstance in which the method should return null, or an empty collection, or an empty array? Test that case first. Is your method looking up something in a database? Then test what happens if you look for something that isn't there.

Often, these are the simplest tests to write, and they give you a good starting-point from which to launch into more complex interactions. They get you off the mark.

Don't be afraid of doing something trivial to make the test work

So you've followed the advice in point 3, and written the following test:

public void testFindUsersByEmailNoMatch() {
    assertEquals(
        "nothing returned",
        0,
        new UserRegistry().findUsersByEmail("not@in.database").length);
}

The obvious, smallest amount of code required to make this test run is:

public User[] findUsersByEmail(String address) {
      return new User[0];
}

The natural reaction to writing code like that just to get the test to run is "But that's cheating!". It's not cheating: writing real code that searches for a user and discovers he isn't there would almost always be a waste of time at this point - it will fall out naturally from the code you write once you start actively looking for users.

What you're really doing is proving that the test works, by adding the simple code and changing the test from failure to success. Later, when you write testFindUsersByEmailOneMatch and testFindUsersByEmailMultipleMatches, the test will keep an eye on you and make sure you don't change your behaviour in the trivial cases - that you don't suddenly start throwing an exception, or returning null instead.
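
A sketch of what that next test might look like (the User constructor and addUser method here are assumptions about the UserRegistry API, made up for illustration):

public void testFindUsersByEmailOneMatch() {
    UserRegistry registry = new UserRegistry();
    // addUser() and this User constructor are hypothetical -
    // the real UserRegistry API may differ.
    User fred = new User("fred", "fred@example.com");
    registry.addUser(fred);

    User[] found = registry.findUsersByEmail("fred@example.com");
    assertEquals("one user returned", 1, found.length);
    assertEquals("the right user returned", fred, found[0]);
}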

Together, points 3 and 4 combine to provide you with a bedrock of tests that make sure you don't forget the trivial cases when you start dealing with the non-trivial ones.

Loose coupling and testability go hand in hand

When you're testing a method, you want the test to be testing only that method. You don't want every test to drag in the layers beneath it, or you'll be left with a maintenance nightmare. For example, if you have a database-backed application then you have a set of unit tests that make sure your database-access layer works. So you move up a layer and start testing the code that talks to the access layer. You want to be able to control what the database layer is producing. You may want to simulate a database failure.

So it's best to write your application in self-contained, loosely coupled components, and have your tests generate dummy components (see mock objects below) in order to test the way each component talks to the others. This also allows you to write one part of the application and test it thoroughly, even when the other parts it will depend on don't exist yet.

Divide your application into components. Represent each component to the rest of the application as an interface, and limit the extent of that interface as much as possible. When one component needs to send information to another, consider implementing it as an EventListener-like publish/subscribe relationship. You'll find all these things make testing easier and not-so-coincidentally lead to more maintainable code.
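
As an illustration of the kind of seam this creates (UserStore and UserStoreListener are names invented for this sketch, not part of the article's code):

// A narrow interface representing the component to the rest
// of the application...
public interface UserStore {
    User[] findUsersByEmail(String address);
    void addUserStoreListener(UserStoreListener listener);
}

// ...and an EventListener-style callback for the publish/subscribe
// relationship. A test can register a dummy listener and record
// what it receives, with no real database in sight.
public interface UserStoreListener extends java.util.EventListener {
    void userAdded(User user);
}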

Use mock objects

A mock object is an object that pretends to be a particular type, but is really just a sink, recording the methods that have been called on it. One implementation of mock objects I wrote in Java using the java.lang.reflect.Proxy class can be found here, but I'm sure there are more capable implementations elsewhere.

A mock object gives you more power when testing isolated components, because it gives you a clear view of what one component does to another when they interact. You can clearly see that yes, the component you're testing called "removeUser" on the user registry component, and passed in an argument of "cmiller", without ever having to use a real user registry component.
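
The core trick is small enough to sketch. This is not the implementation linked above, just a minimal made-up illustration of a dynamic proxy handler that records the names of the methods called on it:

import java.lang.reflect.*;
import java.util.*;

public class RecordingHandler implements InvocationHandler {
    private List calls = new ArrayList();

    // Record the method name; returning null is adequate for
    // void- and Object-returning methods in a simple sketch.
    public Object invoke(Object proxy, Method method, Object[] args) {
        calls.add(method.getName());
        return null;
    }

    public List getCalls() {
        return calls;
    }
}

// Usage: wrap any interface, exercise the component under test,
// then inspect which methods were called:
//
//   RecordingHandler handler = new RecordingHandler();
//   SomeInterface stub = (SomeInterface) Proxy.newProxyInstance(
//       SomeInterface.class.getClassLoader(),
//       new Class[] { SomeInterface.class },
//       handler);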

One useful application of mock objects comes when testing session EJBs, without the hassle of going through the EJB container to do it. Here's a test class that checks that a session EJB correctly rolls back the containing transaction when something goes wrong. Notice also how I'm passing a factory into the EJB - this is something that happens quite often when you want to be able to swap implementations between test time and deployment.

import org.pastiche.devilfish.test.*;
import junit.framework.*;
import java.lang.reflect.*;
import javax.ejb.SessionContext;

public class MonkeyPlaceTest extends TestCase {
    private static class FailingMonkeyFactoryHandler
            implements InvocationHandler {

        // Simulate a factory failure whenever the bean asks for a
        // new monkey; any other call means the test has gone wrong.
        public Object invoke(Object proxy, Method method, Object[] args)
                throws Exception {
            if (method.getName().equals("createMonkey"))
                throw new MonkeyFailureException("Could not create");

            throw new UnsupportedOperationException("Didn't expect that");
        }
    }

    public void testFailedCreate() throws Exception {
        MockObjectBuilder factoryBuilder = MockObjectBuilder.getInstance(
            MonkeyFactory.class,
            new FailingMonkeyFactoryHandler());

        MockObjectBuilder contextBuilder = MockObjectBuilder.getInstance(
            SessionContext.class);
        MonkeyPlaceBean bean = new MonkeyPlaceBean();
        bean.setSessionContext(
            (SessionContext) contextBuilder.getObject());
        try {
            bean.newMonkey(
                (MonkeyFactory) factoryBuilder.getObject(),
                "fred",
                "bananas");
            fail("Expected monkey failure exception");
        } catch (MonkeyFailureException e) {
            // The bean must have told its context to roll back the
            // transaction before letting the exception propagate.
            assertEquals("session was rolled back",
                new MethodCall("setRollbackOnly"),
                contextBuilder.getHandler().getMethodCall(0));
        }
    }
}
