
Real Developers Demand Application Quality Measures


Dr. Bill Curtis is the Senior Vice President & Chief Scientist of CAST Software

The Misperception that Developers Dislike Quality Data

For decades, IT executives have wanted to use software quality data as a basis for evaluating the performance of their developers. When implemented, this ‘worst practice’ led to anger, mistrust, and unintended side effects. Clever developers learned how to maximize their numbers at the expense of other important outcomes. Because developers are proud of their professional work, because many managers are not current in development technology, and because software is a nascent engineering discipline, developers react quickly when they sense that quality data is being misused or misinterpreted. These reactions have convinced many that developers abhor measures of the quality of their work.

Not true! When quality data is used for improvement and self-management rather than performance evaluation, developers embrace it, with profound benefits. Many developers already use static code analyzers on their components as part of their unit testing. Watts Humphrey demonstrated through the Personal Software Process that developers can use their own data to make extraordinary improvements in their personal performance. The key to using quality data effectively is understanding why developers need such data and how it can empower them.
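
As a concrete illustration (a hypothetical example, not drawn from the article), consider the kind of latent defect a static analyzer flags during unit testing before any test case would expose it. In Python, for instance, analyzers such as pylint warn about mutable default arguments, a defect that silently shares state across calls:

    # A classic latent defect: the default list is created once at function
    # definition and then shared across every subsequent call.
    def add_tag(tag, tags=[]):          # flagged by static analyzers as a dangerous default
        tags.append(tag)
        return tags

    # The idiomatic fix: default to None and create a fresh list per call.
    def add_tag_fixed(tag, tags=None):
        if tags is None:
            tags = []
        tags.append(tag)
        return tags

    print(add_tag("a"))        # ['a']
    print(add_tag("b"))        # ['a', 'b']  <- state leaked between calls
    print(add_tag_fixed("a"))  # ['a']
    print(add_tag_fixed("b"))  # ['b']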

Why Application Quality Data Is More Important Than Ever to Developers

Application developers have always sought insight into the architectural and engineering attributes of their applications. These attributes can be measured and displayed in a form that provides developers with profound knowledge about the maintainability, robustness, security, usability, performance, and other aspects of their applications. These measures have been available since the late 1970s, but only recently have emerging trends motivated their adoption in leading IT organizations.

Over the last decade, four dramatic transformations in application development have elevated the importance of measures characterizing the internal quality of software. Unfortunately, these trends have made such data far more difficult to collect manually.

  • First, applications are no longer monolithic heaps of code written in a single language. Rather, they are developed in multiple tiers, each serving a different system function, implemented with a different technology, and written in a different language. Consequently, developers can no longer be expert in all the technologies being integrated into an application. They need insight into how their components will interact with technologies in other tiers.
  • Second, applications are now developed by multiple teams, often on multiple continents, who may be employed by different companies. In highly distributed environments, informal channels of communication are no longer adequate for exchanging critical technical information between teams. Tacit knowledge about the application must be supplemented with current and objective facts about the application and its attributes.
  • Third, many of an application’s most critical weaknesses are difficult to detect through testing. Testing typically focuses on functional defects, yet the most devastating defects in operation are frequently non-functional defects for which test cases are difficult to devise (a sketch of one such weakness follows this list). Even stress and load testing often fail to adequately simulate operational conditions, leaving critical problems undetected. Development teams need much more comprehensive information about the quality and integrity of an application’s architecture, coding, and component interactions across technologies and tiers.
  • Fourth, developers are beginning to see their role from a different perspective. They are moving beyond the antiquated role of the isolated craftsman to embrace the more encompassing role of providing a service to the business. This transformation has been accelerated by the agile movement. To ensure they are providing the best service possible, developers need current and objective information on the architecture and quality attributes of the application that delivers the service.
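
To make the third point concrete, here is a minimal, hypothetical sketch (not from the article) of a non-functional weakness that sails through functional testing. The query below is functionally correct and passes against a ten-row test database, but its unbounded fetch materializes every matching row in memory at once, a problem that only surfaces at production scale:

    import sqlite3

    # Tiny test dataset: the functional behavior is identical at any scale.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)",
                     [(i, "open") for i in range(10)])

    def open_order_ids_risky(conn):
        # Unbounded fetch: loads every matching row into memory at once.
        rows = conn.execute("SELECT id FROM orders WHERE status = 'open'").fetchall()
        return [r[0] for r in rows]

    def open_order_ids_streamed(conn, batch=1000):
        # Bounded alternative: iterate in batches so memory use stays flat
        # no matter how large the table grows.
        cur = conn.execute("SELECT id FROM orders WHERE status = 'open'")
        while True:
            rows = cur.fetchmany(batch)
            if not rows:
                break
            yield from (r[0] for r in rows)

    print(open_order_ids_risky(conn))           # passes a functional test
    print(list(open_order_ids_streamed(conn)))  # same result, scalable access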

As IT applications perform an increasing proportion of an enterprise’s critical business functions, as their development becomes more complex, and as developers accept greater responsibility for how well their software serves the business, developers become more receptive to, and often even eager for, greater insight into problems that degrade the effectiveness of their applications.

How Does Data on Application Quality Benefit Developers?

Software developers experience five primary benefits when they have access to data on the internal software quality of their applications.

  1. Better diagnostics — Objective quantitative and qualitative feedback on the internal quality of an application can help developers pinpoint specific weaknesses in the code that must be fixed in order to avoid problems such as outages, degraded performance, data corruption, and security breaches. Among the greatest causes of such operational problems are non-functional weaknesses in the code that frequently evade detection during test. Many of these weaknesses result from harmful interactions among different technologies whose consequences were difficult to understand or detect during development and test. Consequently, assisting developers with comprehensive automated analysis of the interactions across an entire application, rather than only on their own components, empowers them to strengthen the overall integrity and dependability of a complex application. Eliminating these problems helps developers ensure they are delivering the best possible service to the business.
  2. Less rework — Industry data on rework in IT are staggering: between 30% and 50% of all effort spent on application development in most organizations goes to fixing problems, i.e., rework! Since the cost of fixing a defect typically increases tenfold across each major phase of development (the arithmetic is sketched just after this list), the earlier a defect can be detected, the less rework a developer will perform. When developers can use comprehensive data on the internal quality of their applications to eliminate problems before the applications are placed in operation, not only does the cost of ownership decrease, but the damaging losses from outages, security breaches, and the like are reduced as well. World-class development organizations reduce rework to less than 8% of their overall development costs.
  3. Greater productivity — Studies have repeatedly shown that 50% of the time devoted to maintaining and enhancing existing applications is spent trying to figure out what is going on in the code; only half a developer’s effort is devoted to designing, implementing, and testing new functionality. Dramatic improvements in the quality of an application’s architecture and code can significantly reduce the time developers need to understand the inner workings of an application, moving them sooner to the productive part of their work. Much of the difficulty in understanding a modern application comes from the complex interactions among different tiers and technologies. When developers can refactor their code to improve its internal quality, they not only deliver new functionality to the business much more quickly, but they also inject fewer new defects, resulting in better service to the business and lower application ownership costs for IT.
  4. Faster learning — When developers receive comprehensive information on the internal quality of their work products, they begin learning about problems hidden in the interactions among the tiers and technologies in the application. In addition to helping them eliminate current weaknesses in the code, this knowledge helps them avoid such weaknesses in the future. Consequently, developers are able to deliver more functionality, faster, and with fewer unintended side effects. The fastest, most effective learning occurs with immediate feedback on actual work, and comprehensive feedback on the quality of their work affords an extraordinary opportunity for learning and professional development. In fact, the most frequent weaknesses identified in these analyses can be used to guide training at the beginning of each release cycle.
  5. Tighter teamwork — Objective data on internal software quality can dramatically strengthen coordination within a single development team and across distributed teams. A common terminology for discussing architectural issues and weaknesses improves communication within and across teams and provides a foundation for strong coordination. Driving team meetings from objective data helps eliminate subjective arguments and focuses the team on selecting and prioritizing the most important weaknesses to remediate. Comprehensive quality data can empower development teams to take greater responsibility for managing the overall quality of their applications.
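
To make the tenfold escalation mentioned in item 2 concrete, here is a small illustrative calculation; the $100 base cost is an assumption chosen for round numbers, not a figure from industry data:

    # Illustrative arithmetic only: a tenfold cost increase per phase,
    # starting from an assumed $100 design-time fix.
    BASE_COST = 100  # assumed cost (USD) to fix a defect caught in design
    for n, phase in enumerate(["design", "coding", "testing", "production"]):
        print(f"{phase:>10}: ${BASE_COST * 10 ** n:,}")
    # design: $100, coding: $1,000, testing: $10,000, production: $100,000.
    # The same defect costs a thousand times more once it reaches operations.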

How Should Internal Quality Data Be Presented to Developers?

Information on internal software quality will only be used effectively if it is integrated into the development process. Developers already collect a limited set of this information by statically analyzing their own code during unit test. When collected at the application level, however, such data can be incorporated into the results reported from each build or release to prioritize and schedule remedial actions. They can be used by developers to create checklists of problems to hunt for during design inspections or code reviews. They can be used to ensure training is meeting the actual needs of developers. Finally, they can be used by development teams to raise awareness of potential problems as they enter new development phases or iterations. The common theme underlying these uses is continual improvement in both the business value of the application and the professional capability of its developers. Comprehensive quality data empowers developers to take charge of the quality of their professional work and the service it delivers.
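
As one sketch of what incorporating quality data into each build might look like, the script below compares critical-violation counts between two analyzer reports and fails the build on a regression. The JSON schema, field names, and report paths are assumptions made for illustration, not the output format of any particular tool:

    import json
    import sys

    def critical_count(report_path):
        # Assumed schema: {"violations": [{"severity": "critical", ...}, ...]}
        with open(report_path) as f:
            report = json.load(f)
        return sum(1 for v in report["violations"] if v["severity"] == "critical")

    def main(previous_report, current_report):
        before = critical_count(previous_report)
        after = critical_count(current_report)
        print(f"critical violations: {before} -> {after}")
        if after > before:
            print("quality gate FAILED: new critical violations introduced")
            return 1
        print("quality gate passed")
        return 0

    if __name__ == "__main__":
        sys.exit(main(sys.argv[1], sys.argv[2]))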


About the Author

Dr. Bill Curtis is the Senior Vice President and Chief Scientist of CAST. He is best known for leading development of the Capability Maturity Model (CMM), the global standard for evaluating the capability of software development organizations.
