The Myth of Open Source Security

  • May 26, 2000
  • By John Viega

Open source software projects can be more secure than closed source projects. However, the very things that can make open source programs secure -- the availability of the source code, and the fact that large numbers of users are available to look for and fix security holes -- can also lull people into a false sense of security.

Many Eyeballs

The core open source phenomenon responsible for making code secure is the "many eyeballs" effect. With lots of people scrutinizing a program's source code, bugs -- and security problems -- are more likely to be found.

Why do programmers look at source code? Mostly for their own benefit: they've found a piece of open source software useful, and they want to improve or change it for their own specific needs. Sometimes, too, source code attracts scrutiny just to make sure it meets certain needs, even when there's no intention of modifying it. Companies that require a high level of security, for example, might do a code review as part of a security audit. This could be done for any software product where the source is available, of course, regardless of whether it's open source or produced commercially.

Source code can also attract eyeballs from people who explicitly want to find security problems in it. Perhaps they want to build a name for themselves in the security community; maybe they're motivated by altruism, or by a belief that others should be aware of security holes. Earlier this month, for example, two hackers broke into the open source Apache Software Foundation Web site, posted a Microsoft logo on it, and then published an explanation of how an improperly configured FTP server allowed them access. Many others share information about security vulnerabilities in less intrusive ways, such as posting to the Bugtraq mailing list. And, unfortunately, there will probably always be some people scrutinizing source code because they want an attack that no one else has -- in which case, you're not likely to gain much from their eyeballs.

Eyes That Look Do Not Always See

"Everyone using Mailman, apparently, assumed that someone else had done the proper security auditing. "

With people motivated to look at the source code for any number of reasons, it's easy to assume that open source software is likely to have been carefully scrutinized, and that it's secure as a result. Unfortunately, that's not necessarily true.

Lots of things can discourage people from reviewing source code. One obvious deterrent: if the code looks like a big tangled mess, you'll get fewer eyeballs on it. And as we discovered while writing Mailman, the GNU mailing list manager, anything that makes the code harder for the average open source user to hack on means fewer eyeballs. We wrote Mailman in Python, which is nowhere near as popular as C, and we often heard from people who would have liked to help with development but didn't want to learn Python to do it.

People using open source programs are most likely to look at the source code when they notice something they'd like to change. Unfortunately, that doesn't mean the program gets free security audits by people good at such things. It gets eyeballs looking at the parts of the code they want to change. Often, that's only a small part of the code. What's more, programmers preoccupied with adding a feature generally aren't thinking much about security when they're looking at the code.

And, unfortunately, software developers sometimes tend to ignore security up front and try to bolt it on afterwards. Even worse, most developers don't necessarily know much about security. Many programmers know a bit about buffer overflows and are aware of a handful of functions that should be avoided, but many don't understand buffer overflows well enough to avoid problems beyond those few dangerous calls. And when it comes to flaws other than buffer overflows, the problem gets worse. For example, developers commonly use cryptography but misapply it in ways that destroy the security of a system, and they commonly introduce subtle information leaks into their programs accidentally. It's common to use encryption that is too weak and can easily be broken, and common to exchange cryptographic keys in a way that's actually insecure. People often try to hand-roll their own protocols out of well-known cryptographic primitives -- but cryptographic protocols are generally more complex than one would expect, and are easy to get wrong.
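
To make that concrete, here is a deliberately flawed sketch -- invented for this article, not taken from Mailman or any other real project -- of the kind of hand-rolled "encryption" that is easy to break: XOR against a short, repeating key. The function name, message, and key are all made up for illustration.

    # A deliberately weak, hand-rolled cipher: XOR with a short repeating key.
    # It looks like cryptography, but it falls apart as soon as an attacker
    # knows or can guess any part of the plaintext.

    def xor_encrypt(plaintext: bytes, key: bytes) -> bytes:
        # The same function also decrypts, which is typical of naive schemes.
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

    message = b"list admin password: hunter2"   # invented example data
    weak_key = b"secret"                        # short key, reused for every message
    ciphertext = xor_encrypt(message, weak_key)

    # Known-plaintext attack: XOR the ciphertext against the predictable prefix
    # and the repeating key falls right out.
    guess = b"list admin password: "
    print(bytes(c ^ p for c, p in zip(ciphertext, guess)))   # b'secretsecretsecretsec'

One guessable prefix, or a few hundred bytes of ciphertext and some frequency analysis, recovers the key -- exactly the "too weak and easily broken" failure described above.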

Far Too Trusting

"Until this week, the version of Mailman which contains these security holes was included in Red Hat Professional Linux version 6.2. "

So despite the conventional wisdom, the fact that many eyeballs are looking at a piece of software is not likely to make it more secure. It is likely, however, to make people believe that it is secure. The result is an open source community that is probably far too trusting when it comes to security.

Take the case of the open source mailing list manager Mailman, which I helped write. Mailman runs mailing lists at an impressive number of sites. For three years, until March 2000, Mailman had a handful of glaring security problems in code that I wrote before I knew much about security. An attacker could have used these holes to gain access to the operating system on Linux machines running the program.

These were not obscure bugs: anyone armed with the Unix command grep and an iota of security knowledge could have found them in seconds. Even though Mailman was downloaded and installed thousands of times during that time period, no one reported a thing. I finally realized there were problems as I started to learn more about security. Everyone using Mailman, apparently, assumed that someone else had done the proper security auditing, when, in fact, no one had.
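
To give a sense of what "grep and an iota of security knowledge" means in practice, here is a hypothetical sketch -- not the actual Mailman code, and with invented paths and names -- of the kind of hole such a search turns up in a Python program:

    import os

    def archive_list(listname):
        # BAD: listname comes from an untrusted user. A value like
        # "foo; rm -rf /" turns this into two shell commands, run with
        # whatever privileges the script has.
        os.system("gzip -c /var/lists/" + listname +
                  " > /var/archives/" + listname + ".gz")

    # A reviewer can surface every call site worth inspecting with one line:
    #
    #     grep -n "os.system\|os.popen" *.py
    #
    # and then check whether any hit builds its command string from user input.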

And if three years seems like a long time for security holes to go undetected, consider the case of Kerberos, the open source authentication protocol. According to Ken Raeburn, one of the developers of the MIT Kerberos implementation, some of the buffer overflows recently found in that package have been there for over ten years.

The many eyeballs approach clearly failed for Mailman. And as open source programs are increasingly packaged and sold as products, users -- particularly those who are not familiar with the open source world -- may well assume that the vendor they are buying the product from has done some sort of security check on it.

Until this week, for example, version 1.0 of Mailman, which contains these security holes, was included in Red Hat Professional Linux version 6.2. (If you're running a Mailman version earlier than 2.0 beta, allow me to suggest that you upgrade immediately. The latest version can be found on the Mailman Web site at http://www.list.org).




