The Need for Automated Acceptance Testing
This is the first article in a series on the testing tool FitNesse. This installment discusses the need for such a tool in an agile software development environment. Subsequent installments will dig into how to code support for various types of testing using FitNesse.
Many software development teams attempt various forms of iterative/incremental development (IID), with the intent to deliver software every couple of weeks. Teams that have their act together can be bolder, delivering every week or even every few hours.
The big question is, what exactly are they delivering and to whom?
Agile software development processes are built around the team's ability to deliver working software to the customer, in a live production environment, at frequent intervals. Frequent delivery gets business value into the customer's hands sooner, and it gives the customer the ability to steer product development by providing feedback to its producers.
The reality, however, is that most shops don't have the capability to deliver production-ready software every other week. A full regression test takes most shops several days or even several weeks. So, even if you were able to construct quality code over an iterative cycle of two weeks, the software could take two weeks or longer before a testing team certifies it as ready for production.
Doing two weeks' worth of development followed by two weeks of testing might sound acceptable. But it would be effective only if you knew the testers weren't going to encounter any problems in their two weeks of testing. (If you knew that, maybe you wouldn't have to bother with testing!)
The problem is that the development team has moved on to the next iteration's coding work. If problems crop up, you have less-than-ideal choices. You can interrupt the current iteration, which is what usually happens. This disrupts the development team, detracting from their ability to complete work on the new set of features. It also kills your ability to use the iteration as a data point.
Or, you can schedule any required fixes into a subsequent iteration, further delaying release of the new feature. A feature slated for delivery in four weeks now won't be ready for at least eight weeks (two weeks of initial development + two weeks of testing + two weeks of rework + two weeks of final testing).
The result of all this, in either case, is that a development team delivers incomplete software to the testing team. That's most shops. But, there is a better way.
What you really want to do is verify that the software works as expected as soon as it's complete. The problem is that you can't verify just the single feature. You must ensure that introducing the new feature doesn't break anything in the rest of the system. In systems of any real size, the only way to do this in reasonable time is to automate as many tests as possible.
Note: I didn't say you must automate all tests.
Many tools exist to aid in the automation of software verification, including high-end, high-cost tools such as Mercury's WinRunner and QuickTest Professional.
These powerful tools have limitations, however. They are designed for use by QA professionals only, and most require significant training before they can be used effectively. Still, with them it's possible to manage a complete suite of regression tests, so that you could deliver working, tested software every two weeks.
If testing were all there was to certifying software for delivery every two weeks, the high-end tools would be more than sufficient. The bigger problem is that the tests written using these tools aren't visible to anyone but the testers who build the tests.
One of the biggest problems in developing software is translating end-user needs into a specification of what to build. That translation is complicated by human communication problems such as ambiguity and misunderstanding. Often, you deliver software that meets what you think are the customer's requirements, only to hear, "That's not what I asked for!"
In the agile world, tests become the common currency between teams. A "customer team," usually composed of business analysts, quality assurance professionals, and subject matter experts, translates end-user needs into a host of expressive functionality tests. These tests express the goals of the users: what they want to do with the system, and how the system should accomplish those goals. You can view these tests as a form of specification by example.
You call these tests acceptance tests (ATs) because they define the acceptance criteria: If the software passes all its acceptance tests, the customer accepts delivery of it.
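As a sketch of what specification by example can look like, here is a hypothetical acceptance test for a withdrawal rule, expressed as a table of concrete cases (the rule, column names, and amounts are all invented for illustration):

```
withdrawals may not exceed the available balance
|starting balance|withdraw|expected new balance|
|100.00          |40.00   |60.00               |
|100.00          |120.00  |100.00              |
```

Anyone on the customer team can read, question, and extend a table like this, and each row doubles as a concrete test case the system must pass.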
Sound familiar? It should. ATs are very similar to the concept of use cases. Ivar Jacobson suggested that use cases can provide central traceability throughout development, from requirements gathering to production. If you've crafted quality use cases, you can reconcile all software development activities against them, including design, coding, testing, and even creation of the user manual.
The same applies to ATs, except that they go one step further: You know that the system does exactly what the tests specify because they are executable.
Acting as a cousin of readable, executable use cases is a great goal for ATs. Is it realistic? The high-end tools don't allow all parties involved to access and read the tests. They probably will someday, because most agile teams want that capability.
Incremental Development of a Testing Framework
Until then, it's possible to get there on your own. Imagine you were starting a new project in an IID mode. You plan the first iteration, targeting delivery of four small features. Your estimates for these features also include the time to develop automated tests for them.
You then code tests for the new features. For most environments, whether they be web applications, web services APIs, Java Swing user interfaces, or 3270 systems, there are ways to create tests in code to interact with these systems. If you've done your job of designing well, your user interface (UI) is extremely thin and contains no logic. In that case, your tests can directly interact with the rest of your system beneath, replacing that thin veneer of a UI.
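As a minimal sketch of that idea, the test below drives a hypothetical TransferService directly, exercising the same behavior a UI screen would trigger. The class name and its operations are invented for illustration; the point is that the test never touches a screen.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical application-layer class that a thin UI would call.
class TransferService {
    private final Map<String, Integer> balances = new HashMap<>();

    void deposit(String account, int amount) {
        balances.merge(account, amount, Integer::sum);
    }

    // Moves money between accounts; rejects overdrafts.
    boolean transfer(String from, String to, int amount) {
        int available = balances.getOrDefault(from, 0);
        if (amount <= 0 || available < amount) return false;
        balances.put(from, available - amount);
        balances.merge(to, amount, Integer::sum);
        return true;
    }

    int balanceOf(String account) {
        return balances.getOrDefault(account, 0);
    }
}

public class TransferServiceTest {
    static void check(boolean condition, String label) {
        if (!condition) throw new AssertionError(label);
    }

    public static void main(String[] args) {
        TransferService service = new TransferService();
        service.deposit("checking", 100);

        // Exercise the same behavior the UI would trigger.
        check(service.transfer("checking", "savings", 40), "transfer accepted");
        check(service.balanceOf("checking") == 60, "checking debited");
        check(service.balanceOf("savings") == 40, "savings credited");

        // An overdraft attempt is rejected, leaving balances intact.
        check(!service.transfer("checking", "savings", 500), "overdraft rejected");
        check(service.balanceOf("checking") == 60, "checking unchanged");

        System.out.println("all transfer tests passed");
    }
}
```

Because the test bypasses the UI entirely, it runs in milliseconds and never breaks when a screen layout changes.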
As you code these tests, you start to eliminate duplicate concepts in the test code, and you clean it up for readability. You'll recognize the emergence of a testing framework. The next step is that you externalize testing steps into a simple scripting language that can be driven by a flat file. You design the scripting language so that it expresses simple end-user interactions with the system. Ultimately, you get to the point where business analysts might be the ones creating the scripts.
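A sketch of what such a flat-file script and its interpreter might look like follows. The three verbs (deposit, transfer, check) are invented for illustration; a real team would grow a vocabulary to match its own domain.

```java
import java.util.HashMap;
import java.util.Map;

public class ScriptRunner {
    private final Map<String, Integer> balances = new HashMap<>();

    // Interprets one line per step; returns the number of failed "check" steps.
    public int run(String script) {
        int failures = 0;
        for (String line : script.split("\n")) {
            String[] words = line.trim().split("\\s+");
            if (words[0].isEmpty()) continue;   // skip blank lines
            switch (words[0]) {
                case "deposit":   // deposit <account> <amount>
                    balances.merge(words[1], Integer.parseInt(words[2]), Integer::sum);
                    break;
                case "transfer":  // transfer <from> <to> <amount>
                    int amount = Integer.parseInt(words[3]);
                    balances.merge(words[1], -amount, Integer::sum);
                    balances.merge(words[2], amount, Integer::sum);
                    break;
                case "check":     // check <account> <expected-balance>
                    int actual = balances.getOrDefault(words[1], 0);
                    if (actual != Integer.parseInt(words[2])) failures++;
                    break;
            }
        }
        return failures;
    }

    public static void main(String[] args) {
        // In practice this script would live in a flat file that an
        // analyst edits; it's inlined here to keep the sketch self-contained.
        String script =
              "deposit checking 100\n"
            + "transfer checking savings 40\n"
            + "check checking 60\n"
            + "check savings 40\n";
        int failures = new ScriptRunner().run(script);
        System.out.println(failures == 0 ? "script passed" : failures + " checks failed");
    }
}
```

The scripting vocabulary reads like end-user interactions, which is what makes it plausible for analysts, not just programmers, to write the scripts.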
Some other concerns will creep up. For example, where do the test scripts reside? How does everyone get to them? How do you really create a scripting language that the analysts can understand? What do you have to teach people about flat files?
Most analysts understand spreadsheets. In fact, Microsoft Excel is the primary tool of many testers. What if you were to build a testing tool around Excel? I've helped a team do exactly that, by shaping our simple scripting language into something that could be managed by Excel macros.
Still, the problem of universal access exists. How does the product analyst know where to find the tests? What if two people want to make changes to them at the same time?
As you might imagine, agile development teams have built many tools to meet these needs. One such tool is FitNesse, an open-source effort that combines a framework for table-driven integration tests with the ease of use of a wiki.
Ward Cunningham developed Fit (Framework for Integrated Test), the base testing tool in FitNesse. Micah Martin of Object Mentor married Fit with a wiki to come up with the FitNesse tool. FitNesse is quickly becoming the standard for acceptance testing in the agile community.
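To make "table-driven" concrete, here is a heavily simplified sketch of the mechanism a Fit-style framework uses: reflection binds each table column to the fixture class, where a plain header names a public input field and a header ending in "()" names a method whose return value is the expected output. This is an illustration of the idea only, not FitNesse's actual code.

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;

public class TableRunner {
    // A tiny fixture in the classic Division style: inputs are
    // public fields, expected outputs are no-argument methods.
    public static class Division {
        public double numerator;
        public double denominator;
        public double quotient() { return numerator / denominator; }
    }

    // Runs table rows against a fixture; returns the number of failed cells.
    public static int run(Object fixture, String[] headers, double[][] rows) {
        int failures = 0;
        try {
            for (double[] row : rows) {
                for (int col = 0; col < headers.length; col++) {
                    String header = headers[col];
                    if (header.endsWith("()")) {
                        // Output column: invoke the method, compare to the cell.
                        Method m = fixture.getClass()
                            .getMethod(header.substring(0, header.length() - 2));
                        double actual = (double) m.invoke(fixture);
                        if (Math.abs(actual - row[col]) > 1e-9) failures++;
                    } else {
                        // Input column: set the public field from the cell.
                        Field f = fixture.getClass().getField(header);
                        f.setDouble(fixture, row[col]);
                    }
                }
            }
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
        return failures;
    }

    public static void main(String[] args) {
        String[] headers = { "numerator", "denominator", "quotient()" };
        double[][] rows = {
            { 10,   2, 5   },
            { 12.6, 3, 4.2 },
        };
        int failures = run(new Division(), headers, rows);
        System.out.println(failures == 0 ? "table passed" : failures + " cells failed");
    }
}
```

In FitNesse itself, the table lives on a wiki page, so the binding between readable examples and executing code is exactly what makes the tests accessible to the whole team.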
In the next installment, I'll give an overview of the design of FitNesse. I'll also demonstrate how to use the tool to create and execute simple tests.
About the Author
Jeff Langr is a veteran software developer with a score and more years of experience. He's authored two books and dozens of published articles on software development, including Agile Java: Crafting Code With Test-Driven Development (Prentice Hall, 2005). You can find out more about Jeff at his site, http://langrsoft.com, or you can contact him directly at email@example.com.