Open source software projects can be more secure than closed source projects. However, the very things that can make open source programs secure — the availability of the source code, and the fact that large numbers of users are available to look for and fix security holes — can also lull people into a false sense of security.
The core open source phenomenon responsible for making code secure is the “many eyeballs” effect. With lots of people scrutinizing a program’s source code, bugs — and security problems — are more likely to be found.
Why do programmers look at source code? Mostly for their own benefit: they’ve found a piece of open source software useful, and they want to improve or change it for their own specific needs. Sometimes, too, source code attracts scrutiny just to make sure it meets certain needs, even when there’s no intention of modifying it. Companies that require a high level of security, for example, might do a code review as part of a security audit. This could be done for any software product where the source is available, of course, regardless of whether it’s open source or produced commercially.
Source code can also attract programmers’ eyeballs simply for reasons of personal gain. Some people may explicitly wish to find security problems in the code. Perhaps they want to build a name for themselves in the security community. Maybe they’re motivated by altruism or a belief that others should be aware of security holes. Earlier this month, for example, two hackers broke into the open source Apache Software Foundation Web site, posted a Microsoft logo on it, and then published an explanation of how an improperly configured FTP server allowed them access. Many others share information about security vulnerabilities in less intrusive ways, such as posting to discussions on the Bugtraq mailing list. And, unfortunately, there will probably always be some people scrutinizing source code because they want an attack that no one else has — in which case, you’re not likely to gain much from their eyeballs.
Eyes That Look Do Not Always See
With people motivated to look at the source code for any number of reasons, it’s easy to assume that open source software is likely to have been carefully scrutinized, and that it’s secure as a result. Unfortunately, that’s not necessarily true.
Lots of things can discourage people from reviewing source code. One obvious deterrent: if the code looks like a big tangled mess, it will get fewer eyeballs. And as we discovered while writing Mailman, the GNU mailing list manager, anything that makes it harder for the average open source user to hack on the code means fewer eyeballs. We wrote Mailman in Python, which is nowhere near as popular as C, and we often heard from people who would have liked to help with development, but did not want to have to learn Python to do it.
People using open source programs are most likely to look at the source code when they notice something they’d like to change. Unfortunately, that doesn’t mean the program gets free security audits by people good at such things. It gets eyeballs looking at the parts of the code they want to change. Often, that’s only a small part of the code. What’s more, programmers preoccupied with adding a feature generally aren’t thinking much about security when they’re looking at the code.
And, unfortunately, software developers tend to ignore security up front and try to bolt it on afterwards. Worse, most developers don’t know much about security. Many programmers know a bit about buffer overflows and can name a handful of functions that should be avoided, but many of them don’t understand buffer overflows well enough to avoid problems beyond those few dangerous calls. When it comes to flaws other than buffer overflows, the situation gets worse still. For example, developers commonly use cryptography but misapply it in ways that destroy the security of a system, and they commonly introduce subtle information leaks into their programs accidentally. It’s common to use encryption that is too weak and easily broken, and common to exchange cryptographic keys in a way that’s actually insecure. People often try to hand-roll their own protocols out of standard cryptographic primitives. But cryptographic protocols are generally more complex than one would expect, and are easy to get wrong.
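To make the "too weak and easily broken" point concrete, here is a minimal sketch of the kind of hand-rolled scheme the text warns about: "encrypting" with a short repeating XOR key. The key and messages are made up for illustration; the point is that the scheme leaks structure an attacker can see.

```python
# A hand-rolled "encryption" scheme of the kind the text warns about:
# XOR with a short repeating key. Key and messages are hypothetical.
from itertools import cycle

def xor_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """'Encrypt' by XORing the plaintext against a repeating key.
    NOT secure: the key repeats, so plaintext patterns leak through."""
    return bytes(p ^ k for p, k in zip(plaintext, cycle(key)))

key = b"s3cret"          # a hypothetical 6-byte key
msg = b"ATTACK" * 2      # the same 6-byte block, repeated

ct = xor_encrypt(msg, key)

# Identical plaintext blocks line up with identical key bytes, so they
# produce identical ciphertext blocks -- the repetition is visible to
# anyone watching the wire, with no cryptanalysis required.
assert ct[:6] == ct[6:12]

# The scheme is also malleable: flipping a ciphertext bit flips the
# corresponding plaintext bit, and nothing detects the tampering.
tampered = bytes([ct[0] ^ 0x01]) + ct[1:]
assert xor_encrypt(tampered, key)[0] == msg[0] ^ 0x01
```

The same function decrypts, since XOR is its own inverse; that symmetry is the only property the scheme actually delivers.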
Far too Trusting
So despite the conventional wisdom, the fact that many eyeballs are looking at a piece of software is not likely to make it more secure. It is likely, however, to make people believe that it is secure. The result is an open source community that is probably far too trusting when it comes to security.
Take the case of the open source mailing list manager Mailman, which I helped write. Mailman runs mailing lists at an impressive number of sites. For three years, until March 2000, Mailman had a handful of glaring security problems in code that I wrote before I knew much about security. An attacker could use these holes to gain access to the operating system on Linux computers running the program.
These were not obscure bugs: anyone armed with the Unix command grep and an iota of security knowledge could have found them in seconds. Even though Mailman was downloaded and installed thousands of times during that time period, no one reported a thing. I finally realized there were problems as I started to learn more about security. Everyone using Mailman, apparently, assumed that someone else had done the proper security auditing, when, in fact, no one had.
And if three years seems like a long time for security holes to go undetected, consider the case of Kerberos, an open source authentication protocol. According to Ken Raeburn, one of the developers of the MIT Kerberos implementation, some of the buffer overflows recently found in that package have been there for over ten years.
The many eyeballs approach clearly failed for Mailman. And as open source programs are increasingly packaged and sold as products, users — particularly those who are not familiar with the open source world — may well assume that the vendor they are buying the product from has done some sort of security check on it.
Until this week, for example, version 1.0 of Mailman, which contains these security holes, was included in Red Hat Professional Linux version 6.2. (If you’re running a Mailman version earlier than 2.0 beta, allow me to suggest that you upgrade immediately. The latest version can be found on the Mailman Web site at http://www.list.org).
Security: Tougher than It Looks
Even if you get the right kind of people doing the right kinds of things, you may have problems that you never hear about. Security problems are often incredibly subtle, and may span large parts of a source tree. It is not uncommon to have two or three features spread throughout a program, none of which constitutes a security problem alone, but which can be combined to create a security breach. For example, two buffer overflows recently found in Kerberos version 5 could only be exploited when used in conjunction with each other.
As a result, doing security reviews of source code tends to be complex and boring, since you generally have to look at a lot of code, and understand it pretty well. Even many experts don’t like to do these kinds of reviews.
And even the experts can miss things. Consider the case of the popular open source FTP server wu-ftpd. In the past two years, several very subtle buffer overflow problems have been found in its code. Almost all of these problems had been in the code for years, despite the fact that the program had been examined many times by both hackers and security auditors. If any of them discovered the problems, they didn’t announce it publicly. In fact, wu-ftpd has been used as a case study for vulnerability detection techniques that never identified these problems as definite flaws. One tool flagged one of the problems as potentially exploitable, but after examining the code thoroughly for a couple of days, the researchers concluded that there was no way the flagged problem could actually be exploited. Over a year later, they learned they were wrong, when an expert audit finally did turn up the problem.
In code of any reasonable complexity, it can be very difficult to find bugs. wu-ftpd is less than 8,000 lines of code, yet it was easy for several bugs to remain hidden in that small space over long periods of time.
To compound the problem, even when people know about security holes, they may not get fixed, at least not right away. Even once identified, the security problems in Mailman took many months to fix, because security was not the core development team’s most immediate concern. In fact, the team believes one problem still persists in the code, but only in a configuration that we suspect doesn’t get used.
An Army in My Belly
The single most pernicious problem in computer security today is the buffer overflow. While the availability of source code has clearly reduced the number of buffer overflow problems in open source programs, according to several sources, including CERT, buffer overflows still account for at least a quarter of all security advisories, year after year.
Open source proponents sometimes claim that the “many eyeballs” phenomenon prevents Trojan horses from being introduced in open source software. The speed with which the TCP wrappers Trojan was discovered in early 1999 is sometimes cited as supporting evidence. This too can lull the open source movement into a false sense of security, however, since the TCP wrappers Trojan is not a good example of a truly stealthy Trojan horse: the code was glaringly out of place and obviously put there for malicious purposes only. It was as if the original Trojan horse had been wheeled into Troy with a sign attached that said, “I’ve got an army in my belly!”
Well-crafted Trojans are quite different. They generally look like ordinary bugs with security implications, and are very subtle. Take, for example, wu-ftpd. Who is to say that one of the buffer overflows that have been found recently was not a Trojan horse introduced years ago when the distribution site was hacked?
The open source movement hasn’t made the problem of buffer overflows go away. But eventually, newer programming languages may; unlike C, modern programming languages like Java or Python never have buffer overflow problems, because they do automatic bounds checking on array accesses. As with any technology, fixing the root of the problem is far more effective than any ad hoc solution.
Is Closed Source Any More Secure?
Critics of open source software might say that providing source code makes the job of the malicious attacker easier. If only a binary is available, the bar has been raised high enough to send most such people looking for lower-hanging fruit. But as the many well-publicized security holes in commercial software make clear, attackers can find problems without the source code; it just takes longer. From a security point of view, the advantages of having the source code available for everyone to see far outweigh any benefit hackers may gain.
There are many benefits of open source software unrelated to security. And the “many eyeballs” effect does have the potential to make open source software more secure than proprietary systems. Currently, however, the benefits open source provides in terms of security are vastly overrated, because there isn’t as much high-quality auditing as people believe, and because many security problems are much more difficult to find than people realize. Open source programs that appeal to a limited audience are particularly at risk, because of the smaller number of eyeballs looking at the code. But all open source software is vulnerable, and the open source movement can only benefit by paying more attention to security.
About the Author
John Viega is a research associate at Reliable Software Technologies, in Sterling, Va. He holds an M.S. in Computer Science from the University of Virginia. He developed and maintains Mailman, the GNU mailing list manager. His research interests include software assurance, programming languages, and object-oriented systems.