Build times matter. Agile development and other pragmatic methodologies rely heavily on the practices of continuous integration, automated testing, and frequent builds. As build times grow, it becomes increasingly difficult to maintain the frequent use of these practices.
Automated builds must meet the restrictions imposed by natural psychological limits in order to serve their purpose. A build that is intended to be run during test-driven development cycles must execute quickly so it won’t interrupt the flow of development. A build supporting a pre-check-in smoke test that hangs mercilessly for ten minutes will discourage the exact practice it is intended to support. Build systems must be designed to meet their purpose.
Enterprise build systems not only package an application, but also provide significant feedback regarding the quality of the build. Successful compilation is the most basic form of feedback. Unit, integration, and functional tests provide additional layers. Reports, metrics, and statistics can also be generated to provide even more useful information. Although each of these types of feedback can be beneficial, the interruption of focus it requires may be detrimental to a project.
By simply observing developers in action, it is easy to determine a team’s natural limits. As a general rule, a build that must be run at least every ten minutes should take no longer than 15-20 seconds to complete. If it takes longer, the developer will either become distracted from the task at hand or stop running the build as frequently as suggested. Similarly, a check-in build that takes longer than 30 seconds will discourage either frequent check-ins or the execution of the build prior to check-in. Both can be detrimental to the team. On the flip side, a nightly build that fails to perform the full suite of unit, integration, and functional tests will be ignored. It simply does not provide enough feedback to warrant attention.
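These limits are easy to make concrete. The following Python sketch wraps a build invocation and flags it when it exceeds its time budget; the budget values mirror the guidelines above, and the `run_with_budget` helper and build-type names are illustrative, not part of any real build tool.

```python
import time

# Time budgets, in seconds, drawn from the guidelines above.
# These are assumptions to illustrate the idea, not fixed rules.
BUDGETS = {"developer": 20, "check-in": 30}

def run_with_budget(build_type, build_fn):
    """Run a build function, returning its elapsed time and whether
    it stayed within the psychological budget for its build type."""
    start = time.perf_counter()
    build_fn()
    elapsed = time.perf_counter() - start
    over = elapsed > BUDGETS[build_type]
    if over:
        print(f"{build_type} build took {elapsed:.1f}s, over its "
              f"{BUDGETS[build_type]}s budget; consider trimming it")
    return elapsed, over
```

A team could call this from a wrapper script so that every developer sees, immediately, when the fast build has drifted past its limit.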
Interestingly, factors other than execution time also play a role in determining how long a developer is willing to focus on an executing build. The feedback provided during the build process plays a major role. A silent build that reports little to no information until the process has completed is very difficult to sit through; however, a build that continuously reports its status, highlighting the tasks that have completed successfully and reporting issues as it goes, can help extend the limits that would otherwise be imposed.
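As a minimal sketch of such continuous feedback, the script below runs a sequence of build steps and reports each step's status and elapsed time as it goes rather than staying silent until the end. The step names and `echo` commands are placeholders for whatever your build tool actually invokes.

```python
import subprocess
import time

# Hypothetical build steps; each command is a stand-in for a real
# compiler, test runner, or packaging invocation.
STEPS = [
    ("compile", ["echo", "compiling"]),
    ("unit tests", ["echo", "running unit tests"]),
    ("package", ["echo", "packaging"]),
]

def run_build(steps):
    """Run each step in order, printing status as soon as it is known."""
    for name, cmd in steps:
        start = time.time()
        print(f"[build] {name}...", flush=True)
        result = subprocess.run(cmd, capture_output=True, text=True)
        elapsed = time.time() - start
        if result.returncode != 0:
            print(f"[build] {name} FAILED after {elapsed:.1f}s")
            return False
        print(f"[build] {name} ok ({elapsed:.1f}s)")
    return True

if __name__ == "__main__":
    run_build(STEPS)
```

The same effect is achievable in Ant or Maven through their normal per-target and per-phase console output; the point is simply that a running commentary makes a build feel shorter than a silent one of identical length.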
Designing a build system and optimizing builds can be daunting if undertaken as a single, monolithic effort. Learn to develop a build system iteratively; refactor it to meet psychological limits as needed. As an example, consider for a moment your developer build. If your goal is to have it execute in less than 15 seconds, it’s safe to assume that during the initial stages of a project it can consistently clean, compile, and run all of the unit tests within the allotted time. As the codebase grows, it may be necessary to refactor the build and postpone the feedback associated with cleaning previously compiled artifacts until you run your check-in build.
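Skipping the clean step usually means rebuilding incrementally: recompiling only when a source file is newer than its compiled output. The timestamp check below is a simplified sketch of what tools like Ant's `javac` task already do internally; the `needs_rebuild` helper is hypothetical.

```python
import os

def needs_rebuild(source, target):
    """Return True when the target is missing or older than its source.

    This is the timestamp comparison incremental builds rely on to
    avoid redundant work in the fast developer build; the check-in or
    nightly build can still start from a clean slate.
    """
    if not os.path.exists(target):
        return True
    return os.path.getmtime(source) > os.path.getmtime(target)
```

Deferring the full clean to the check-in build trades a small risk of stale artifacts for a developer build that stays inside its 15-second budget.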
Understanding the time-consuming components of a build is also a critical part of optimization. Copying files, executing tests that connect to a database, and running reports are some of the most common culprits behind long-running builds. Although each of these has its place within a build system, it’s critical that they are performed only when needed.
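Before deciding what to defer, measure where the time actually goes. The context manager below, a hypothetical sketch, records the duration of each named build phase so the worst offenders stand out; the phase names and `time.sleep` calls stand in for real file copies and database-backed tests.

```python
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def timed(phase):
    """Record how long a named build phase takes, even if it fails."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[phase] = time.perf_counter() - start

# Wrap the suspect phases to see where the time goes.
with timed("copy resources"):
    time.sleep(0.01)  # stand-in for a real file copy
with timed("db tests"):
    time.sleep(0.02)  # stand-in for tests that hit a database

slowest = max(timings, key=timings.get)
```

Once the slowest phases are known, they can be moved out of the developer build and into the check-in or nightly build where their cost is acceptable.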
Automated builds are essential for highly productive teams. To be effective, teams must learn to recognize psychological barriers and optimize the build to meet them. Whether you are using Maven, Ant, or some other tool to automate builds, the psychology of builds should be taken into account.
About the Author
David DeWolf is the chief technical architect and founder of Three Pillar Software, Inc. He works with mid-sized and Fortune 1000 companies to establish corporate standards that promote best practices and agile development. David has over eight years of commercial software development experience and is a member of the Apache Software Foundation’s Struts, Tiles, and Portals projects. David actively participates in the Java Community Process as a member of the Java Portlet Specification Expert Group and is the author of various online publications.