In our last installment, we introduced policy and covered product requirements, error handling, and object states. Part two will finish discussing elements that should be part of a secure Java code policy.
Sensitive Information and Memory
Sensitive information refers mainly to passwords, algorithms, and cryptographic keys, but it can mean any information a product uses that is not intended for an end user to see. Ideally, this kind of data should never be hard-coded or placed in permanent memory. That can be difficult, because some of Java's other features, such as automatic garbage collection, make it hard to control exactly when and where data resides in memory.
There should be standards for incorporating legacy software when necessary. This is crucial if the legacy software supports different encryption or weaker security standards. The level of trust between a new product and a legacy product must be addressed.
Access and Authentication
In general, permissions should be granted only if absolutely necessary, and then only for the minimum amount of time needed to do a particular job. Unix policies, for example, take this tack by limiting the use of root. When granting permissions, always start from least privilege: a program should do its job with the least privilege that suffices, and permissions should be released as soon as the job is done.
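As a sketch of this principle, a Java policy file can grant a code base only the narrow permission it needs instead of blanket access. The JAR path and directory below are hypothetical placeholders:

```
// Hypothetical entry in a java.policy file: grant this one code base
// read access to a single directory, rather than AllPermission.
grant codeBase "file:/opt/app/report-gen.jar" {
    permission java.io.FilePermission "/var/data/reports/-", "read";
};
```

Keeping each grant this narrow makes it easy to audit exactly what any piece of code is allowed to do.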
Unfortunately, poor authentication, or a poor authentication policy, is a common problem. For instance, password-based mechanisms frequently have no password-creation guidelines. Programmers should also be careful not to leave secrets in code by hard-coding cryptographic keys or passwords. And as soon as a password or other sensitive information in memory has been used, it should be erased.
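One common way to follow the erase-after-use advice is to hold the secret in a char array rather than a String, so it can be overwritten the moment it is no longer needed. A minimal sketch; the `verify` method here is a hypothetical stand-in for a real credential check:

```java
import java.util.Arrays;

public class PasswordWipe {
    // Use char[] so the secret can be overwritten in place; a String
    // would linger in memory until the garbage collector reclaims it.
    static boolean authenticate(char[] password) {
        try {
            return verify(password);
        } finally {
            Arrays.fill(password, '\0'); // erase the secret immediately after use
        }
    }

    // Hypothetical placeholder for the real verification logic.
    private static boolean verify(char[] password) {
        return password.length > 0;
    }

    public static void main(String[] args) {
        char[] secret = {'s', '3', 'c', 'r', 'e', 't'};
        System.out.println(authenticate(secret)); // true
        System.out.println(secret[0] == '\0');    // true: buffer was wiped
    }
}
```

The `try`/`finally` guarantees the wipe happens even if verification throws.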
Documentation is important for coding and debugging, and it is extremely useful in later litigation or copyright protection.
You can minimize the damage from a security breach by building distinct, special-purpose components or objects. Use as many isolated units as possible when coding; in particular, security-related code should be funneled through small APIs, so that security policies can be tightly controlled.
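One way to read this advice is to route every authorization decision through a single small class, so the policy lives in one auditable place. A minimal sketch, with invented role and action names for illustration:

```java
public final class SecurityGate {
    private SecurityGate() {} // not instantiable: all checks funnel through here

    // Single choke point for authorization decisions. Centralizing the
    // rules here means a policy change touches one class, not dozens of
    // call sites scattered through the code base.
    public static boolean mayPerform(String role, String action) {
        if ("admin".equals(role)) {
            return true;
        }
        return "read".equals(action); // non-admin roles may only read
    }

    public static void main(String[] args) {
        System.out.println(SecurityGate.mayPerform("admin", "delete")); // true
        System.out.println(SecurityGate.mayPerform("guest", "delete")); // false
    }
}
```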
Signed code should be put in one archive file. One common type of attack is a mix-and-match attack, where an attacker constructs a new applet or library that links some of your signed classes together with malicious classes or links two classes that were never meant to be together. Putting signed code in one archive file doesn’t completely solve this problem, but it helps.
Many coders assume that cryptography is the big hammer that solves every problem. In reality, it is just one link in the security chain. You may use a nearly unbreakable cipher, yet flaws elsewhere in the code can still make the product vulnerable. For example, if passwords are stored with a hard-to-break hash but there are no password-creation rules, the passwords themselves remain easy to guess. Because cryptography is usually not the weakest link in a program, direct attempts to break it are rare; there are often easier back doors into a project.
First, make sure that the cryptography you use will not silently fall back to a weaker mode if the program's setup errs or the program fails. Second, avoid rolling your own algorithms; stick with something that has stood up to public scrutiny and has been thoroughly tested. Finally, policy should specify which algorithms are approved for encryption, list who and which portions of a program are expected to hold key material, and state where cryptographic keys come from and how they are protected.
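These points can be combined in code by requesting a specific, well-vetted transformation through the standard Java Cryptography Architecture and failing closed if it is unavailable. AES/GCM is one reasonable policy choice here, not a recommendation from the original text:

```java
import javax.crypto.Cipher;
import java.security.GeneralSecurityException;

public class CipherPolicy {
    // Name the approved algorithm explicitly. If the provider cannot
    // supply it, getInstance throws, and we propagate the failure rather
    // than silently falling back to a weaker default.
    public static Cipher approvedCipher() throws GeneralSecurityException {
        return Cipher.getInstance("AES/GCM/NoPadding");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(approvedCipher().getAlgorithm()); // AES/GCM/NoPadding
    }
}
```

Letting the exception propagate is the "fail closed" behavior: the program stops rather than encrypting with whatever happens to be available.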
Input Checking and Client Calls
Input checking is extremely important. Clients can be replaced, cookies can be modified, and DNS and IP addresses can be spoofed. An attacker can change the binary of a client even if you originally wrote it. Never assume that the standard libraries or the operating system are secure; they may have been tampered with.
Avoid outside data input of this kind where possible. Always run security checks on data from all clients, and make sure you do error checking. Define the allowable set of characters for each input. And avoid calls that invoke the operating system if possible.
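Defining an allowable character set usually means an allow-list check: accept only what the field is defined to contain and reject everything else. A small sketch; the username rule below is a hypothetical example of such a policy, not one from the original text:

```java
import java.util.regex.Pattern;

public class InputCheck {
    // Allow-list: letters, digits, and underscore, up to 32 characters.
    // Anything outside this set is rejected outright.
    private static final Pattern USERNAME = Pattern.compile("[a-zA-Z0-9_]{1,32}");

    public static boolean isValidUsername(String input) {
        return input != null && USERNAME.matcher(input).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidUsername("alice_01"));        // true
        System.out.println(isValidUsername("alice; rm -rf /")); // false
    }
}
```

An allow-list is safer than a deny-list because it does not need to anticipate every dangerous character an attacker might try.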
Random-number generators are another area where programmers often falter. Reusing seed values can be risky: if a programmer is not careful, an attacker can force the seed to produce the same value each time, and an attacker who can guess the seed can predict all of the results. Java's standard pseudo-random generator, java.util.Random, uses only a 48-bit seed, which is not very large, and many programs default the seed to the current system clock, which means the results can be predicted by anyone who knows roughly when the seed was generated.
To get truly random values, you need to find data that is hard to predict, and then hash that data to remove any residual patterns. This is easier if you are programming something hardware-specific, which adds an extra layer of difficulty in Java. Good sources of random data include kernel state information and network traffic. Java's SecureRandom class, which draws on sources such as thread timing for its seeds, has earned a good deal of industry confidence.
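The difference is easy to demonstrate: two java.util.Random instances built from the same seed emit identical sequences, while SecureRandom seeds itself from unpredictable sources. A short sketch:

```java
import java.security.SecureRandom;
import java.util.Random;

public class SeedDemo {
    public static void main(String[] args) {
        // Same seed, same sequence: an attacker who learns or guesses
        // the seed can reproduce every "random" value.
        Random a = new Random(42L);
        Random b = new Random(42L);
        System.out.println(a.nextInt() == b.nextInt()); // true

        // SecureRandom is self-seeded from unpredictable sources, making
        // it the appropriate choice for security-sensitive values.
        SecureRandom sr = new SecureRandom();
        byte[] token = new byte[16]; // 128 bits, e.g. for a session token
        sr.nextBytes(token);
        System.out.println(token.length); // 16
    }
}
```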
Being aware of Java’s limitations is also important when putting together a policy. Java’s SecurityManager does not protect against a program allocating memory until none remains, or spawning threads until the machine slows to a crawl, both typical denial-of-service attacks.
In part three, we will take a closer look at memory management and look at how to protect sensitive information within Java code.
References and Resources
- Java 2 Network Security, Second Edition, Pistoia, Reller, Gupta, Nagnur, and Ramani, Prentice Hall, 1999.
- Java Security Handbook, Jamie Jaworski and Paul Perrone, SAMS Publishing, 2000.
- Securing Java, Gary McGraw and Ed Felten, John Wiley &amp; Sons, Inc., 1999.
- Princeton University’s Secure Internet Programming Team.
About the Author
Thomas Gutschmidt is a freelance writer, in Bellevue, Wash., who also works for Widevine Technologies.