
Securing the Software Development Process


It is well established that securing information systems takes more than placing a firewall in front of a system and calling it a day. Security needs to be approached holistically, and that includes developing applications with security in mind. The analyst firm Gartner has been widely quoted as estimating that over 70% of security vulnerabilities exist at the application layer rather than the network layer, and this is not a new revelation: a SANS report from 2009 reached similar findings, reporting that application vulnerabilities now exceed OS and network vulnerabilities. Yet, despite these well-known problems, we are still plagued with insecure applications, because security is often not a primary focus of software development teams and is instead treated as something that can be bolted on later. Security needs to be considered a critical component of any software project from day one, and this article discusses ways to incorporate security into every stage of the software development lifecycle.

The Software Development Lifecycle

For simplicity, this article assumes that the software development process being followed is the Waterfall Model, with stages divided into Requirements, Design, Implementation, Testing, and Release/Maintenance. The linear nature and distinct stages of the Waterfall Model allow us to focus on the security aspects of software development projects, but everything discussed here can be readily adapted to the various Agile methodologies preferred by many organizations.

Figure 1: The stages of the Waterfall Model

The requirements stage produces a description of the software to be developed and its needed functionality; it is intended to be an agreement between the customers and the developers as to exactly what needs to be accomplished. The best way to ensure that security is a priority in any software development project is to make it an explicit requirement. This raises the question of what security requirements are needed and how they should be decided upon. Because each application is somewhat unique in its functionality and deployment environment, the best way to approach this is with some basic threat modeling and risk assessment. There is more than one way to perform such assessments, but one recommended approach is STRIDE and DREAD. STRIDE is a threat modeling methodology that makes programmers think like an attacker to identify potential ways in which their application could be abused. They try to identify potential attack vectors that fall under the classifications of:

  • Spoofing
  • Tampering
  • Repudiation
  • Information disclosure
  • Denial of service
  • Elevation of privilege

For example, an application that makes use of a database may be subject to information disclosure if an SQL injection attack is successfully launched. The goal of the threat modeling (STRIDE) portion of the exercise is to step through these categories and identify the threats the application may be exposed to in each one. Once the threats are identified, the risk analysis portion (DREAD) can be performed to single out the most critical threats so that controls against them can be worked into the security requirements. DREAD is a risk assessment methodology that ranks threats according to their:

  • Damage potential
  • Reproducibility
  • Exploitability
  • Affected users
  • Discoverability

Each threat is then ranked in each category on a numeric scale of 1 to 3, with 1 being a threat of minimal impact and 3 being a serious threat. The scores for each category are then averaged to assign a total risk score to that particular threat (see Table 1). This allows the assessment to account for the fact that a threat with high damage potential, but which is extremely difficult to exploit and affects only a small handful of users, may pose less risk than a threat with more moderate damage potential that is trivial to exploit and affects the entire user base.

Threat        D  R  E  A  D  Average
XSS           3  3  2  3  3  2.8
Log Deletion  1  1  1  1  1  1.0

Table 1: XSS poses a bigger threat than Log Deletion in this sample risk assessment and, as such, more emphasis should be placed on securing against XSS.
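
The averaging itself is simple arithmetic. As a minimal sketch of how Table 1's scores could be computed (the threat names and 1-to-3 ratings come from the table; the struct and function names are illustrative):

#include <stdio.h>

typedef struct {
   const char *name;
   int scores[5];   /* Damage, Reproducibility, Exploitability,
                       Affected users, Discoverability */
} threat;

/* A DREAD score is the mean of the five category ratings */
double dread_average (const threat *t) {
   int sum = 0;
   for (int i = 0; i < 5; i++)
      sum += t->scores[i];
   return sum / 5.0;
}

int main () {
   threat threats[] = {
      { "XSS",          {3, 3, 2, 3, 3} },   /* averages to 2.8 */
      { "Log Deletion", {1, 1, 1, 1, 1} }    /* averages to 1.0 */
   };
   for (int i = 0; i < 2; i++)
      printf ("%-12s %.1f\n", threats[i].name, dread_average (&threats[i]));
   return 0;
}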

Once the requirements are complete, the design phase can begin, during which the application architecture is laid out, providing a framework around which the implementation of the software can be built. The product of the design phase should explicitly specify what security controls will be implemented and how. For example, if the initial risk assessment identified the need for defenses against SQL injection (SQLi) attacks, the design may call for controls such as input validation and prepared statements, but it should also specify how these controls are to be implemented. Input validation can be done client side or server side (each has advantages, and nothing stops you from using both), but for SQLi defense, server-side validation is the more effective control. The design should include implementation details like this to prevent security holes from being introduced by a developer implementing something in a less secure alternate manner because of an ambiguous design document.
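
To illustrate the prepared-statement control, here is a minimal sketch using the SQLite C API (the users table and the find_user function are hypothetical; other database libraries offer equivalent parameterized-query interfaces):

#include <sqlite3.h>

/* Returns 1 if a matching user exists, 0 if not, -1 on error */
int find_user (sqlite3 *db, const char *name) {
   sqlite3_stmt *stmt;
   /* The ? placeholder is part of the compiled statement, so bound
      input is always treated as data and never as SQL syntax */
   const char *sql = "SELECT id FROM users WHERE name = ?;";

   if (sqlite3_prepare_v2 (db, sql, -1, &stmt, NULL) != SQLITE_OK)
      return -1;
   sqlite3_bind_text (stmt, 1, name, -1, SQLITE_STATIC);

   int found = (sqlite3_step (stmt) == SQLITE_ROW);
   sqlite3_finalize (stmt);
   return found;
}

Even if name contains a classic payload such as ' OR '1'='1, it is matched literally against the column rather than altering the query.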

What is critical about the design phase is taking the time to review the design yourself and having others with the proper experience and background review it as well, to ensure there are no overlooked weaknesses. It may even behoove you to repeat your risk assessment with the new design taken into account, verifying that the risk scores for all threats identified as critical have gone down and that no new critical threats have crept in. It is much easier and cheaper to fix a problem at an early stage of the SDLC than at a later stage, so taking some extra care at this stage will likely pay off in the long run, especially considering that poor design contributes to some of the worst application security problems.

Upon completion of the design phase, the implementation phase can commence. This phase deals with the actual writing of the code. Before any code is written, secure coding standards should be in place within the organization, and developers should be educated on the importance of these standards and the requirement to follow them. Secure coding standards help prevent the introduction of vulnerabilities such as buffer overflows because, for example, an internal standard may forbid the use of functions that perform no bounds checking, such as strcpy in C. Secure coding during the implementation phase can also be helped by making tools like static code analyzers available to developers and mandating their use before code can be committed. For example, a static code analyzer can help identify code like the following, which is susceptible to a buffer overflow:

#include <string.h>

void function (char *str) {
   char buffer[16];
   /* strcpy performs no bounds checking: any input longer than
      15 characters plus the null terminator overflows buffer */
   strcpy (buffer, str);
}

int main () {
   /* length of str = 27 bytes (26 characters + null terminator) */
   char *str = "I am greater than 16 bytes";
   function (str);
   return 0;
}

As an example, this code can be run through RATS, a free static analysis tool readily available for most Linux distributions; Figure 2 shows how the potential buffer overflow was flagged.

Figure 2: RATS output for the preceding code snippet
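
One standard-compliant fix, as a minimal sketch, is to replace the unbounded copy with snprintf, which always null-terminates and never writes past the destination size:

#include <stdio.h>

void function (const char *str) {
   char buffer[16];
   /* snprintf writes at most sizeof(buffer) - 1 characters plus a
      null terminator, truncating longer input instead of overflowing */
   snprintf (buffer, sizeof (buffer), "%s", str);
}

Whether silent truncation is acceptable, or the function should instead reject over-long input, is exactly the kind of detail a secure coding standard should settle.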

In addition to code analyzers, organizations also may want to insist on peer code review, because such reviews can pick up on implementation issues where the code is functionally correct but does not match the design specifications. For example, the code that calls an encryption routine may be written properly, but a code analyzer has no way of knowing that it is only being called to encrypt two of the three data streams that need to be encrypted. Code review can be highly useful because, after days of working on something, developers may be “too close” to their own code to have the objectivity required to pick up on any outstanding implementation issues.

Once the implementation phase has produced a functioning product, the testing stage normally begins, in which the application is verified to function properly. This is typically done by presenting the application with a series of use cases and ensuring that each case is processed without issue and produces the expected outcome. A good QA process may go beyond normal use cases to include corner cases and cases designed to exercise error handling mechanisms, but security testing should go one step further and present the application with “abuse” cases designed to push the various security controls to their limits and see if they can be bypassed. Security testing of applications is often done using a technique called fuzzing, in which a plethora of randomly generated inputs is fed into an application to trigger as many code paths as possible and see whether any issues result, such as application crashes or bypassed escaping; inputs that cannot be handled properly have the potential to be used as attack vectors.
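
As a minimal sketch of the idea (the parse_input function here is a toy stand-in; a real harness would target the application's actual input handlers and typically use a dedicated fuzzer rather than rand()):

#include <stdlib.h>
#include <time.h>

/* Hypothetical routine under test; pretend it parses a message */
static int parse_input (const unsigned char *buf, size_t len) {
   return len > 0 && buf[0] == '{';
}

int main () {
   unsigned char buf[1024];
   srand ((unsigned) time (NULL));

   /* Feed many randomly generated buffers to the parser and rely on
      crashes, sanitizer reports, or assertion failures to flag
      inputs the application cannot handle */
   for (int run = 0; run < 100000; run++) {
      size_t len = (size_t)(rand () % (int) sizeof (buf));
      for (size_t i = 0; i < len; i++)
         buf[i] = (unsigned char)(rand () % 256);
      parse_input (buf, len);
   }
   return 0;
}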

Such testing is often done in the presence of dynamic analysis tools that can uncover potential bugs in running applications. As with all bugs uncovered in code, an organization should have a mechanism in place for security issues identified during testing to be addressed expediently, ideally prior to release, to help minimize the cost of the fix. Some organizations also may choose to have a more formal penetration test performed against an application. Where this is the case, make sure the pen tester has application security experience, because an application pen test requires a different skill set than a network pen test. Keep in mind, too, that a pen test should include more than a vulnerability scan from a third-party tool: passing a vulnerability scan does not mean your app is secure.
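
As a concrete example of dynamic analysis (assuming the earlier overflow snippet is saved as overflow.c), compiling with AddressSanitizer, which ships with modern GCC and Clang, makes the overflow fail loudly at runtime instead of silently corrupting the stack:

gcc -g -fsanitize=address overflow.c -o overflow
./overflow

Running the instrumented binary aborts with a stack-buffer-overflow report pinpointing the offending strcpy call.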

Eventually, the time will come when the developed application is deemed to have reached a certain level of maturity in its code base and the potential bugs in the code have been reduced to a level management deems acceptable. At this point, the code will be released and distributed. It is important to keep in mind that no matter how carefully your organization adhered to its secure SDLC practices and no matter how thoroughly the application was tested, it will still have bugs. Responsible organizations should have plans in place for identifying and verifying security issues and other software bugs, with predefined procedures for patching these issues and distributing the patches established prior to the release of the product. Such mechanisms should themselves be security requirements, incorporated into the design of the product.

Conclusion

This article provided an overview of what a secure software development lifecycle looks like and offered guidance on how software developers can better incorporate application security into every stage of the SDLC. Security, after all, needs to be a continuous process that begins at a project's inception and persists throughout its entire lifetime.
