The overhead of test-driven development

Test-driven development (TDD) has significant merits, but I think its costs outweigh its benefits. The idea, in short, is that

  • you write all your software requirements as test cases,
  • you write the test cases first so that they fail,
  • you write the program until all test cases pass,
  • and your program is complete.

Note that the test cases do not necessarily need to be automated, nor do they need to be unit tests, as long as there’s a test–develop–test–develop cycle.

Iterate, iterate, iterate

This can lead to a very short and sweet development cycle, because as soon as all the tests pass, your program is supposedly ready. But the devil is in the details, because as soon as you lay down those four items, you start asking questions like:

  • How is a requirement properly captured as a test case?
  • What is the level of granularity for your tests?
  • How much work is needed to create working tests?

To answer the first question, you need to properly understand the requirement. You need to be testing the right thing, at the right level. For instance, if a requirement of a library management system says that “the same user cannot make more than one reservation on a single item”, how do you test this? Do you register a user, create an item in the library catalog, make one reservation, then try making another, expecting a rejection? Initially, we’d like the test case to fail by not rejecting the double reservation; once the second reservation is rejected, the test is considered a success.

The design of the test case itself is quite complex. For the test to work at all, we now need functionality for

  • registering or loading a preset user,
  • loading a library catalog (or adding new items to it), and
  • a mechanism for making reservations and checking whether they succeed or not (see the sketch below).
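
To make that scaffolding concrete, here is a minimal sketch of what a JUnit fixture for this scenario might look like. The User, Catalog, TestData, and registerUser names are hypothetical stand-ins for whatever your system would actually call these things:

import org.junit.Before;

public class ReservationTest {
    private ReservationSystem reservationSystem;

    // hypothetical scaffolding: none of these classes exist yet
    @Before
    public void setUp() {
        User user = TestData.getUser();          // register or load a preset user
        Catalog catalog = TestData.getCatalog(); // load the library catalog
        reservationSystem = new ReservationSystem(catalog);
        reservationSystem.registerUser(user);    // the system must know the user
    }
}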

So obviously the first step is not to get the test to fail but to get it to run: our test program needs to reach the point where it can signal a pass or a failure. This is where we get to the second question: what do we use to build the test scenario? Do we just write a program that calls some functions in the reservation system, or do we write user interface tests where another program interacts with ours? Choosing the testing level is not easy! User interface tests require complex machinery to set up, but they are valuable because you can treat the underlying implementation as a black box: the tests cannot become tightly coupled to the implementation, and only the results matter.
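
To make the contrast concrete, here is a hedged sketch of what the same double-reservation check might look like as a user interface test, assuming, purely for illustration, a web front end driven with Selenium; every URL, element id, and message below is made up:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

@Test
public void testDoubleReservationThroughUi() {
    WebDriver driver = new ChromeDriver();
    try {
        driver.get("http://localhost:8080/catalog/items/42"); // hypothetical URL
        driver.findElement(By.id("reserve")).click();         // make the first reservation
        driver.findElement(By.id("reserve")).click();         // attempt the duplicate
        // only the visible outcome matters; the implementation stays a black box
        String message = driver.findElement(By.id("message")).getText();
        Assert.assertEquals("You already have a reservation for this item.", message);
    } finally {
        driver.quit();
    }
}

Notice how much machinery this one check drags in: a browser, a running server, and seeded data.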

First, you must create the universe…

The last question is the most interesting one: we need tests that can signal whether they have passed or failed. For them to be useful, the tests have to actually interact with the program somehow. This could mean that the tests call the very functions you are testing. To get that far, you need to define function stubs so that the compiler or interpreter you are working with does not reject your code.

For instance, consider this Java code:

@Test
public void testReservation() {
    Reservation reservation = TestData.getReservation();

    // reservationSystem is a field defined outside this method
    boolean first = reservationSystem.makeReservation(reservation);
    boolean second = reservationSystem.makeReservation(reservation);
    Assert.assertTrue(first);   // the first reservation should succeed
    Assert.assertFalse(second); // the duplicate should be rejected
}

Rather obviously, for the Java compiler to accept this, the makeReservation method has to exist on reservationSystem! In the ReservationSystem class we define

// in ReservationSystem.java
public boolean makeReservation(Reservation reservation) {
    return false; // stub: the test now compiles and fails, as intended
}

Not to mention that you’d have to define the Reservation class, and so on.
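
For completeness, the rest of that universe could be stubbed out along these lines; this is just a sketch of the minimum needed to compile, with all the real content deferred:

// in Reservation.java
public class Reservation {
    // user, item, and date fields would go here eventually
}

// in TestData.java
public class TestData {
    public static Reservation getReservation() {
        return new Reservation();
    }
}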

I think test-driven development can be quite beneficial when you are refactoring your code or adding or modifying functionality. That is, in the above example, suppose we already have a working reservation system and all the necessary scaffolding in place. Implementing the double-reservation prevention in a TDD fashion is quite simple: just add the test above, and fix your code.
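
For illustration, “fix your code” could amount to something like the following sketch, which assumes Reservation implements equals and hashCode; it is one possible fix, not the canonical one:

// in ReservationSystem.java
import java.util.HashSet;
import java.util.Set;

public class ReservationSystem {
    private final Set<Reservation> reservations = new HashSet<>();

    public boolean makeReservation(Reservation reservation) {
        // Set.add returns false when an equal reservation is already present,
        // so the duplicate is rejected and the test above passes
        return reservations.add(reservation);
    }
}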

But what about when you don’t have a program to modify, or you are writing a suite of test cases for a part that isn’t there yet? Suppose we don’t have a reservation system at all, and this test case is one of many, each of which needs some kind of functionality just to compile and reach the failing state?

Furthermore, if you’re starting from scratch, you need to spend considerable effort on writing the stubs first, and this indirectly guides your development process and the way you design the program. Here’s where we get to the big question: does test-driven development guide your program design in an appropriate way?

The answer is: it depends. To be precise, it depends on how you approach program design. For those who don’t mind writing the stubs and test scaffolding first, it can be a hugely productive way of working, but for those who like to think in small iterations, with functionality first, it feels counterintuitive.

When building a bridge, it makes a lot of sense to specify up front that it should be robust enough to tolerate the weight of a hundred cows when complete. On the other hand, it would be rather stupid to continuously drive cows onto the bridge before construction has even started. The cows would fall into the water, and everyone, especially the cows, would find the experience quite displeasing.

Although software is not physical, time is still time. Just as the bridge testers would have to spend time driving cows into the water and then rescuing them from it, at the beginning of a TDD process one has to fight the compiler just to get the tests to run.

So to me, doing the initial grunt work of setting up stubs for test-driven development feels like throwing cows into the water. I like to start with the requirements, figure out a minimum viable design that’s testable, then write the tests, and repeat until the program satisfies the requirements and the tests are rigorous enough.

But when refactoring programs or adding or modifying functionality, I sometimes practice TDD, though not religiously. It largely depends on what I’m working on and whether the change–test–change–test feedback loop is fast.