Doing TDD Well
Test-driven development (TDD) is a simple concept. The TDD cycle: write a small bit of test and ensure it fails; write just enough code and ensure all tests pass; clean up any new deficiencies in code quality; finally, ensure all tests still pass. Repeat. There are few rules beyond this definition of how to apply TDD, only nuances such as:
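As a minimal sketch of one pass through that cycle (the `to_roman` example and its names are hypothetical, not from this article):

```python
import unittest

# Step 1: write a small bit of test for behavior that doesn't exist yet.
# Run it and watch it fail (to_roman is not yet defined).
class RomanNumeralTest(unittest.TestCase):
    def test_converts_one(self):
        self.assertEqual("I", to_roman(1))

# Step 2: write just enough code to make the test pass--nothing more.
def to_roman(number):
    return "I"

# Step 3: re-run all tests. With them passing, clean up any new
# deficiencies in code quality, confirm tests still pass, and repeat
# with the next small test (e.g., test_converts_two).
```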
- Don't check in code unless all tests are passing.
- Test everything.
The latter rule is simply a restatement of the technique.
Learning to do TDD necessitates learning how to build mocks or stubs, techniques that involve faking out collaborator code to enable easy unit testing**. Although mocking isn't a day-one technique, neither is it an advanced concept. Mocking is a workhorse for many programmers, a tactic that is an essential part of learning how to test effectively. Mocking concepts are taught as part of any comprehensive introductory training on TDD.
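A hand-rolled stub can be sketched in a few lines; here the collaborator (`RateService`, a hypothetical name) would normally make a slow network call, so the test substitutes a fake:

```python
# Hypothetical collaborator stub: returns a canned answer instead of
# performing a real (slow, volatile) rate lookup.
class StubRateService:
    def current_rate(self, currency):
        return 1.25

class PriceConverter:
    def __init__(self, rate_service):
        # The collaborator is injected, which is what makes faking possible.
        self.rate_service = rate_service

    def to_usd(self, amount, currency):
        return amount * self.rate_service.current_rate(currency)

# The unit test exercises PriceConverter's logic quickly and in isolation.
converter = PriceConverter(StubRateService())
assert converter.to_usd(10, "GBP") == 12.5
```

Mocking libraries automate this pattern, but the underlying idea is no more than what the stub above shows: replace a collaborator with something you control.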
I'm often asked for training on advanced TDD. "What do you consider advanced TDD?" I ask. Typical answers are, "how do I test databases," "how do I test GUIs," "how do I test multithreaded code," and so on.
Yet, these aren't advanced TDD challenges. Solving them involves applying fundamental TDD concepts. The basic TDD technique itself promotes decomposing larger problems, such as how to test database interaction, into smaller chunks. Why? Because they are easier to test-drive! Interacting with databases can be broken into smaller problems, such as:
- Generating proper SQL strings
- Obtaining database connections
- Loading configuration information
- Executing SQL statements
- Populating objects from query results
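To illustrate with the first bullet: generating SQL strings is pure string logic, so it unit-tests trivially with no live database. (The builder function below is a hypothetical sketch, not code from this article.)

```python
# Hypothetical SQL-generation concern, isolated from connections,
# configuration, and execution--so it needs no database to test.
def select_by_id(table, id_value):
    # int() guards the interpolated value; real code would use
    # parameterized queries, but the testing point stands either way.
    return f"SELECT * FROM {table} WHERE id = {int(id_value)}"

assert select_by_id("users", 42) == "SELECT * FROM users WHERE id = 42"
```

The other bullets decompose the same way: each becomes a small, focused unit that an ordinary TDD practitioner can test-drive.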
Each of these smaller concerns can easily be unit tested by a developer with knowledge of basic TDD concepts. Note that some level of integration testing is still necessary in addition to the unit tests. Integration tests ensure that all these database "subsystems" are tied together correctly. They also ensure that the application's understanding of the database remains in sync with its actual definition.
The decomposition concept applies equally well to other complex challenges. Any TDD programmer worth his or her salt can tackle any of these challenges with a bit of tenacity. Ultimately, I don't believe there is such a thing as "advanced TDD."
No Advanced TDD?
TDD is a simple technique. It's not mindless, however, and is something that most developers continue to refine the more they practice it. Learning to do TDD well is much like learning to play the piano. Any beginner is capable of hitting the same notes that the expert hits. Yet the difference in the efficiency of pressing the keys, as well as in the quality of the output, is marked.
Developers who commit to doing TDD become masters by treating it as a discipline. The secrets of becoming a master often boil down to applying simple, common-sense notions. The difference between a master and a journeyman is that a master constantly seeks out and constantly applies these small but valuable secrets.
The remainder of this article contains my collection of tips, elements that I believe have helped me take my practice of TDD to a more advanced level. I'm not sure I've mastered it yet!
When I do TDD, I keep in mind three primary ideals: the notion of specification by example, the relationship between testability and design, and incrementalism. I recently wrote about the relationship between testability and design. The interest in specification by example is significant: tests are most sustainable when they not only verify code but also act as highly readable documents that describe class capabilities. Finally, TDD's idea of incremental growth of a quality system is integral to agile's notion of consistently delivering new functionality every week or two. This desire for incrementalism can distill all the way down to the most basic code movements and additions.
I also adhere to one overriding ideal, that of simplicity. TDD is a simple but rich technique. In my experience, efforts to introduce advanced techniques and complexity into a TDD-based process have provided only little value for significant cost.
Moving from beginner to master in any arena requires practice. Expert pianists have invested years of effort and diligent practice. Software development is no different. To grow in skills and capabilities, developers must practice their craft.
Unfortunately, most developers practice only "on the job." They figure that it's enough to build things once. Perhaps they are right—every experience can bring improvement and progress toward becoming a master. But the best programmers understand that there's always a better way to do things. Each reiteration of a solution is a step toward insights that might not come otherwise.
I highly recommend finding a personal problem, something small enough to complete in at most a few hours, but not so trivial that there's room for only small improvement. Ron Jeffries has rebuilt the bowling game many times. I've built components of the "interacting with databases" problem many times. Each time, I've improved upon my solution and its design, upon my technique, and upon my effectiveness with the tools I use.
Two heads are better than one. There's no better way to learn a new technique, and improve upon an existing one, than to work through it with, and get feedback from, a capable partner. Make sure that you switch partners often enough to get more than one viewpoint.
If tests are to act as specifications by example, they must be readable to someone other than their author. Pairing is a great technique to ensure that at least one other person understands a test. A disengaged third party, however, is an even better resource: ask someone to quickly read each new test and paraphrase it. If the third party understands it, ship it! Otherwise, clean it up and try again.
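A test written to this standard reads like a sentence about the class's behavior. The `ShoppingCart` example below is a hypothetical illustration; the point is that a third party could paraphrase each test from its name and body alone:

```python
import unittest

# A trivial class under test (hypothetical example).
class ShoppingCart:
    def __init__(self):
        self._item_prices = []

    def add(self, price):
        self._item_prices.append(price)

    def total(self):
        return sum(self._item_prices)

# Each test name states a capability; the body demonstrates it by example.
class ShoppingCartSpecification(unittest.TestCase):
    def test_a_new_cart_totals_zero(self):
        self.assertEqual(0, ShoppingCart().total())

    def test_total_is_the_sum_of_added_item_prices(self):
        cart = ShoppingCart()
        cart.add(3)
        cart.add(4)
        self.assertEqual(7, cart.total())
```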
Many developers think they're smart enough to know what need not be tested. Some developers will make excuses for what they deem trivial; others will give up on tougher testing challenges. At least two significant downsides of these decisions exist: The untested code will not be adequately documented, and it will not support high-confidence refactoring. Each of these downsides can create significant long-term costs.
Not all tests must be unit tests. You should use integration tests if necessary to fill holes in the unit tests. Minimize these, however, because they generally are more volatile and costly to maintain.
** Some rare few developers have the luxury of building applications that don't talk to databases, external APIs, or other such slow/volatile collaborators. These developers can proudly say they don't have to use mocks.