
Thoughts on Java and Open Source Security


Java has historically been lucky in the open source field. Sun has been committed to community and industry participation since Java’s inception, and by opening up source code it has helped foster innovation and customization. This has cultivated a large collection of extensions and other open source Java projects. However, critics of open source have strong opinions about the security of such software. Their arguments usually fall into one of two categories: developer trust and secrecy.

Developer Trust

Simply put, the critics do not trust the developers. The concern is whether the open source code is developed with any regard to tracking, accountability, or control. There are no guarantees that any of the programmers are experts in their field, and critics wonder who exactly has had a chance to look at the source code, and whether anyone has actually invested any time or effort. They wonder what will happen when bugs and holes are found in the product, and whether there will be any accountability. They worry about the lack of documentation and official support. They may even suspect developers of being hackers planting software with exploitable holes.

The rebuttal is that open source may instead contribute to developer quality. How efficient can code review be within a closed circle? And how efficient is a review if there is only a small team of developers?

The lack of documentation and official support is a problem. However, open source usually means many eyes looking at existing code and, therefore, a higher probability of alleviating some security problems. This is a core benefit to open source development, and the hope is that bugs and exploits are more likely to be found. Unfortunately, it is easy to assume open source has been scrutinized, even if it has not. Just because the code is online does not mean that it gets free security audits by field experts. More eyes do not necessarily mean the right kind of scrutiny.

Commercial efforts may actually put code through a wider variety of testing, just to make sure the code is marketable on multiple platforms. However, commercial products do not necessarily receive as much high-quality security auditing as people would like to believe, and many security problems are more difficult to find than people realize.

In open source, a programmer’s personal reputation is on the line, which is not necessarily the case when a developer works for a company, where mistakes, security holes, and foul-ups are more easily kept quiet and where the skills of the programmer are rarely common knowledge to the community of users. In an open source project, those skills are on display in every piece of code, simply through peer review. Open source programmers also tend to be participants in their profession and in a community of developers, exchanging ideas and solutions. Closed source programmers may work in isolation, tend not to swap ideas, and their projects are considered proprietary or guarded as trade secrets.

Secrecy

For many, secrecy is equivalent to security. If source code is left open for anyone to peruse, the argument goes, any hacker could insert malicious code into it, and because the product is fully out in the open, it is easier for hackers to find holes and bugs to exploit.

The truth, however, is that little security is gained from secrecy. Commercial source code is closed for business and proprietary reasons, not for security. Hackers find bugs and exploits in both open and closed source, and they probably have better debugging, analysis, and reverse engineering tools than most developers. With open source, the hope is that the community will find bugs more often and fix them quickly, before they are widely exploited. With closed source, security exploits are often underpublicized, go unnoticed, and may not be fixed until future product releases. Often, the opening of formerly closed code leads to the discovery of new or different holes and bugs.

The Real Plus to Open Source Code

“Trust no one” is the best security policy. Any piece of software or hardware could deliver malicious code. However, unless you build all of your software and hardware yourself, at some point you have to trust someone.

The plus to open source software is that you can grab an application or piece of code, scrutinize its security yourself, and then modify it to your requirements. This is normally not an option with commercial products. It does, however, take time and the skill set needed to modify the code: you need the resources to walk through whatever documentation exists, check with the community of experts, test the code for bugs and exploits, and make the modifications. This normally takes less time than building a project from scratch, but significantly more than buying and trusting a commercial product.

A Few Solutions

It is difficult to find bugs in any code of reasonable size or complexity. Open source code can be an adequate solution if you or your team can devote time and energy to debugging and possibly modifying the product code. Being able to spot the most common security mistakes in the code is the first step.

Buffer Overflow

According to CERT, buffer overflow holes and exploits are more frequent than any other type of security problem; nearly one quarter of all CERT security advisories involve buffer overflows. It is unfortunately common for inexperienced coders not to look for buffer overflows, or to know only a handful of the dangerous system calls. This is one of the reasons programming languages like Java and Python have gained such popularity: their automatic bounds checking avoids this class of mishap.
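The difference is easy to see in a minimal sketch (the class name and loop below are purely illustrative, not taken from any particular project). Where a C program can silently overwrite adjacent memory, the Java runtime checks every array index and raises an exception instead:

    public class BoundsCheckDemo {
        public static void main(String[] args) {
            byte[] buffer = new byte[8];
            try {
                // Deliberate off-by-one: the last iteration writes past the
                // end of the array. In C this could corrupt adjacent memory;
                // the Java runtime rejects the write and throws instead.
                for (int i = 0; i <= buffer.length; i++) {
                    buffer[i] = (byte) i;
                }
            } catch (ArrayIndexOutOfBoundsException e) {
                System.out.println("Out-of-bounds write stopped: " + e);
            }
        }
    }

Bounds checking does not make a program secure by itself, but it removes the class of memory errors that CERT reports as the most commonly exploited.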

Misapplied Cryptography

One shouldn’t assume an application is secure simply because it uses cryptography or encryption. Cryptography can be misapplied. It will not matter how strong a random number algorithm is if it is seeded from the system clock and a hacker can determine the approximate time the number was seeded. Key exchanges are often done in insecure ways. Although one type of encryption may be strong enough for a simple use, that does not imply that it is sufficient for another; encryption should be carefully matched to the value of the information it is protecting. Li Gong (Sun Microsystems’ Director of Engineering and former Chief Java Security Architect) states that, “Adding cryptography to an application will not make it secure. Security is determined by the overall design and implementation of a system.”
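A minimal sketch of the clock-seed mistake in Java (class and variable names are illustrative only): java.util.Random seeded with the current time can be reproduced by anyone who guesses the timestamp, whereas java.security.SecureRandom draws its seed material from unpredictable platform sources.

    import java.security.SecureRandom;
    import java.util.Random;

    public class SeedDemo {
        public static void main(String[] args) {
            // Weak: the seed is just the current time. An attacker who can
            // estimate when this ran can replay the seed and regenerate
            // every "random" value that follows.
            Random weak = new Random(System.currentTimeMillis());
            int predictable = weak.nextInt();

            // Stronger: SecureRandom is seeded from platform entropy, so its
            // output cannot be recreated from a timestamp alone.
            SecureRandom strong = new SecureRandom();
            byte[] keyMaterial = new byte[16];
            strong.nextBytes(keyMaterial);

            System.out.println("Predictable value: " + predictable);
            System.out.println("Generated " + keyMaterial.length + " key bytes");
        }
    }

Even this only addresses one pitfall; as Gong’s comment suggests, the surrounding design still determines whether the key material is handled safely.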

Misunderstanding Complexity

Security problems are often extremely subtle and may span the entire source code of a project. For instance, some buffer overflow exploits only work in conjunction with one another. Looking for these holes is both complex and tedious. Often they are the result of accidental, subtle information leaks, or of applications being used in unforeseen ways. With technology evolving so rapidly, it is difficult to understand the complexity involved in securing a system, or to predict how the system will be used in the future.

Lack of a Consistent Security Policy

Securing code begins at the policy level. Establishing consistent security guidelines and coding techniques should be the first step in securing code. Unfortunately, security is often brought in ad hoc, or after a project is well underway.

Resources and References

  • Java Cryptography, Jonathan Knudsen, O’Reilly & Associates, 1998
  • Inside Java 2 Platform Security, Li Gong, Addison-Wesley, 1999

A Few Open Source Security Projects

  • Pretty Good Privacy: An international effort providing open source security, although it has received some criticism for flaws in the random number generator in past versions.
  • Kerberos: An open source security protocol used for network authentication, developed at MIT. Ken Raeburn, one of the MIT developers, noted that some of the buffer overflow bugs found recently had been there for over a decade, but there aren’t many security products with a decade of open source support behind them.
  • The Secure Electronic Transaction (SET) protocol is an open industry standard developed for the transmission of payment information over the Internet and other electronic networks. It is used by both MasterCard and Visa.
  • The U.S. Navy’s open source security project, SHADOW (Secondary Heuristic Analysis System for Defensive Online Warfare), is an intrusion detection program. It picks up and analyzes attempts to break into computer networks, and is distributed freely online.

Cryptographic Open Source Projects

The story of cryptography is somewhat different from that of commercial applications. Although the U.S. keeps tight reins on cryptographic technology, there is a sizable community that holds that the security of an algorithm should not depend on its secrecy.

  • Cryptix is an international, volunteer effort to produce open-source cryptographic software libraries. Development is currently focused on Java.
  • Rijndael is a cryptographic algorithm that is free and open to all people for all purposes. It is a public domain algorithm created by two Belgian cryptographers, Joan Daemen and Vincent Rijmen, and has been chosen by the National Institute of Standards and Technology (NIST) as the Advanced Encryption Standard (AES) for the 21st century. Unlike encryption developed in the U.S., it is unencumbered by political or export regulations. A brief usage sketch follows this list.
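As a rough illustration (the class name and plaintext string below are hypothetical), Rijndael is exposed through the standard Java Cryptography Extension under the algorithm name "AES"; whether it is available depends on the installed JCE provider.

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;

    public class RijndaelDemo {
        public static void main(String[] args) throws Exception {
            // Generate a 128-bit Rijndael/AES key through the JCE.
            KeyGenerator keyGen = KeyGenerator.getInstance("AES");
            keyGen.init(128);
            SecretKey key = keyGen.generateKey();

            // ECB with PKCS5 padding keeps the sketch short; real systems
            // should prefer a mode that uses an initialization vector.
            Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
            cipher.init(Cipher.ENCRYPT_MODE, key);
            byte[] ciphertext = cipher.doFinal("open source".getBytes("UTF-8"));

            cipher.init(Cipher.DECRYPT_MODE, key);
            byte[] plaintext = cipher.doFinal(ciphertext);
            System.out.println(new String(plaintext, "UTF-8"));
        }
    }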

About the Author

Thomas Gutschmidt is a freelance writer, in Bellevue, Wash., who also works for Widevine Technologies.
