Secure Design Principles
Secure by Default
When you design a system, it should be optimized for security by default wherever possible. One problem some software vendors have had in the past is that when they deploy their software, they turn on every possible feature and make every service available to the user by default. From a security standpoint, the more features built into a piece of software, the more susceptible it is to attack. For example, if attackers are trying to observe the behavior of the application, the more features and functionality you make available, the more they can observe, and the higher the probability that they will find a security vulnerability in one of those features. A rule of thumb when deciding which features to enable by default is to enable only the 20 percent of the features that are used by 80 percent of the users. That way, most users are happy with the software's initial configuration. The other 20 percent, the power users who take advantage of the extra functionality, will have to turn those features on explicitly, but that is acceptable: they are power users, and will have no problem doing so!
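The 80/20 rule above can be sketched as a feature registry whose initial configuration enables only the core features. This is a minimal illustration, not a real product's feature list; the feature names and the split between core and extra features are assumptions made for the example.

```python
# Sketch of the 80/20 default-feature rule: only the small set of features
# used by most users ships enabled; everything else is off until a power
# user explicitly opts in. All feature names here are hypothetical.

CORE_FEATURES = {"compose_mail", "read_mail", "address_book"}      # ~80% of users
EXTRA_FEATURES = {"macro_scripting", "remote_admin", "open_relay"}  # power users

def default_config():
    """Build the initial configuration: core features on, extras off."""
    config = {feature: True for feature in CORE_FEATURES}
    config.update({feature: False for feature in EXTRA_FEATURES})
    return config

def enable_feature(config, feature):
    """Power users must explicitly turn on extra functionality."""
    if feature not in config:
        raise ValueError(f"unknown feature: {feature}")
    config[feature] = True

config = default_config()
assert config["macro_scripting"] is False  # off until explicitly enabled
enable_feature(config, "macro_scripting")
```

The key property is that an untouched installation exposes only the small core set, shrinking what an attacker can observe and probe.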
Another related idea you should be familiar with is hardening a system. An operating system, for instance, can ship from the OS vendor with a great deal of functionality enabled, and the amount of functionality available should be reduced by turning off all unnecessary services by default. For instance, in our book Foundations of Security: What Every Programmer Needs to Know, we describe how a malicious program called the Morris worm took advantage of unhardened UNIX systems that had an unnecessary "debugging" feature enabled in their mail routing program. The high-level idea is that the more features are enabled, the more potential security exploits there are. By default, you should turn off as many things as you can and make the default configuration as secure as it possibly can be.
In newer versions of Windows, Microsoft has turned IIS, as well as many other features in the operating system, off by default. This drastically reduces the ability of a worm to spread over the network. Code Red and Nimda were able to infect thousands of computers within hours because the IIS web server was on by default. Hardening the initial configuration of Windows is one example of how keeping features off by default helps reduce the security threat posed by worms.
Keeping software as simple as possible is another way to preserve software security. Complex software is likely to have many more bugs and security holes than simple software. Code should be written so that it is possible to test each function in isolation.
One example of a large, complicated piece of software that has had many security holes is the UNIX sendmail program (www.sendmail.org). The sendmail program is installed on many UNIX servers deployed on the Internet, and its goal is to route mail from a sender to a recipient.
The simpler the design of a program and the fewer lines of code, the better. A simpler design and fewer lines of code can mean less complexity, better understandability, and better auditability. That does not mean that you should artificially make code compact and unreadable. It means that you should avoid unnecessary mechanisms in your code in favor of simplicity.
In order to keep software simple and security checks localized, you can take advantage of a concept called a choke point: a centralized piece of code through which control must pass. You could, for instance, force all security operations in a piece of software to go through one piece of code. For example, you should have only one checkPassword() function in your system. All password checks should be centralized, and the code that does the password check should be as small and simple as possible so that it can be easily reviewed for correctness. The advantage is that as long as that one piece of code is correct, every password check in the system is correct. This builds on the idea that the less functionality there is to examine in a given application, the less security exposure and vulnerability that piece of software will have. Software that is simple will be easier to test and keep secure.
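A choke point like the checkPassword() function described above might be sketched as follows. This is an illustrative sketch, not the book's implementation; it assumes passwords are stored as salted PBKDF2 hashes, and it uses only the Python standard library (hashlib, hmac, os).

```python
# A single password-checking choke point. Every authentication flow in the
# system calls check_password(); no other code compares passwords, so this
# small function is the only place reviewers must audit for correctness.
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2 with many iterations slows down offline password guessing.
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)

def check_password(stored_salt: bytes, stored_hash: bytes, attempt: str) -> bool:
    """The one choke point for all password checks in the system."""
    candidate = hash_password(attempt, stored_salt)
    # compare_digest does a constant-time comparison, avoiding timing leaks.
    return hmac.compare_digest(stored_hash, candidate)

# Usage: store (salt, hash) at account creation, then route every login
# attempt through check_password().
salt = os.urandom(16)
stored = hash_password("correct horse battery", salt)
assert check_password(salt, stored, "correct horse battery")
assert not check_password(salt, stored, "wrong guess")
```

Because the function is short and self-contained, a security review can concentrate on these few lines rather than hunting for ad hoc password comparisons scattered across the codebase.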
Usability is also an important design goal. For a software product to be usable, its users, with high probability, should be able to accomplish tasks that the software is meant to assist them in carrying out. The way to achieve usable software is not to build a software product first, and then bring in an interaction designer or usability engineer to recommend tweaks to the user interface. Instead, to design usable software products, interaction designers and usability engineers should be brought in at the start of the project to architect the information and task flow to be intuitive to the user.
There are a few items to keep in mind regarding the interaction between usability and security:
- Do not rely on documentation. The first item to keep in mind is that users generally will not read the documentation or user manual. If you build security features into the software product and turn them off by default, you can be relatively sure that they will not be turned on, even if you tell users how and why to do so in the documentation.
- Secure by default. Unlike many other product features that should be turned off by default, security features should be turned on by default, or else they will rarely be enabled at all. The challenge is to design security features that are easy enough to use that they provide a security advantage without being so inconvenient that users shut them off or work around them. For instance, requiring users to choose a relatively strong but usable password when they first power up a computer, and to enter it at each login, might be reasonable. However, requiring users to complete two-factor authentication every time the screen locks will probably result in the feature being disabled or the computer being returned to the manufacturer. If users attempt an insecure action and the software refuses to perform it, that at least prevents them from shooting themselves in the foot. It may encourage them to read the documentation before attempting a highly sensitive operation, or even to complain to the manufacturer to make the product easier to use.
- Remember that users will often ignore security if given the choice. If you build a security prompt into a software product, such as a dialog box that pops up saying, "The action that you are about to conduct may be insecure. Would you like to do it anyway?" a user will most likely ignore it and click "Yes." Therefore, you should employ secure-by-default features that do not allow the user to commit insecure actions, and that do not bother asking the user's permission to proceed with a potentially insecure action. The usability of the application may suffer, but the security will be better. It probably also means that the product should be redesigned or refactored to help users carry out the task they seek to accomplish in a more secure fashion. Remember that if users are denied the ability to carry out their work due to security restrictions, they will eventually find a way to work around the software, and that workaround could itself create an insecure situation. The balance between usability and security should be carefully maintained.
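The "don't ask, just refuse" policy above can be sketched in a few lines. This is a hypothetical example: the action names and the policy table are invented for illustration, and a real product would point the user toward a secure alternative rather than merely failing.

```python
# Deny-by-default instead of prompting: insecure actions are refused
# outright, with a message steering the user to the secure equivalent,
# rather than a "Would you like to do it anyway?" dialog the user will
# click through. Action names are hypothetical.

INSECURE_ACTIONS = {
    "send_unencrypted": "send_encrypted",
    "disable_cert_check": "install_trusted_cert",
}

def perform(action: str) -> str:
    if action in INSECURE_ACTIONS:
        # No yes/no prompt: refuse, and name the secure alternative.
        alternative = INSECURE_ACTIONS[action]
        raise PermissionError(f"{action} is disabled; use {alternative} instead")
    return f"performed {action}"

print(perform("send_encrypted"))       # allowed
try:
    perform("send_unencrypted")        # refused, never prompted
except PermissionError as e:
    print(e)
```

The design choice here is that the decision is made once, by the designer, instead of thousands of times by users who will reflexively click "Yes."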
The usability challenge for security software products seems to be greater than for other types of products. In a seminal paper entitled "Why Johnny Can't Encrypt," Alma Whitten and Doug Tygar conducted a usability study of PGP (a software product for sending and receiving encrypted e-mail) and concluded that most users were not able to successfully send or receive encrypted e-mail, even though the user interface for the product seemed "reasonable." Even worse, many of the users in their tests took actions that compromised the security of the sensitive e-mail they were tasked with sending and receiving. Whitten and Tygar concluded that a more particular notion of "usability for security" needed to be considered in the design of the product if it were to be both usable and secure (Whitten and Tygar 1999).
Quoting from their paper, "Security software is usable if the people who are expected to be using it: (1) are reliably made aware of the security tasks they need to perform; (2) are able to figure out how to successfully perform those tasks; (3) don't make dangerous errors; and (4) are sufficiently comfortable with the interface to continue using it."