Securing Java Code: Part 1


Today you can easily find comprehensive checklists for securing networks
and operating systems. You can hire companies that specialize in security
audits to run through your topology and protocols. You can download, for
free, cracking tools and scanners that search for known operating system
vulnerabilities. But what about the underlying causes of security problems?

Software is at the root of most common computer security problems.
Almost all external security violations are made possible by flaws in software.
A common notion is that security holes are created by criminals and hackers
in order to compromise systems. This is untrue. In almost all cases the holes
already exist and are merely exploited. Holes are the result of poor software
design and implementation, and any program, no matter how small or simple,
can have them. The only way to combat this is to adopt software-building
policies that include defensive programming.

Unfortunately, a development process with good security practices does not
guarantee a good product, and few software products are designed with
security in mind at all. In today’s market, software products have a difficult
enough time just shipping within budget and time constraints. This creates a
serious dilemma between programmers, who need to produce quickly, and security
architects, whose policies are tough to implement, costly, and time consuming.

Having secure code actually refers to several things. Primarily, it means
keeping your source code from being compromised: protecting it
from being copied, stolen, or decompiled. Additionally, code needs to be
written in a way that keeps the sensitive information it holds from being
exposed, and it needs to protect the tools it uses, such as algorithms,
cryptographic keys, or passwords. Source code security also includes producing
a product that is as bug free as possible.


Security in software starts with a security policy and is important on a number
of levels. It helps programmers follow product requirements and enforces coding
standards and development guidelines. A secure coding policy is especially useful
for novice programmers.

Policy is the proactive way of applying security to a product. Although
nothing is ever completely secure, policy is an essential, initial component
of any product. While coding policies will vary greatly depending on the
product being created, the following policy suggestions can help provide an extra
layer of protection for a product’s code.

Product Requirements and Risk Management

All software products should have a requirements document that includes
security requirements. While developing security requirements, business
objectives have to be kept in mind. In the real world, there is a significant
trade-off between security and cost, and security policies often interfere
with convenience. Sometimes it may be appropriate to leave security holes in
a product, and possible theft may be an acceptable business risk.

In general, systems should be designed with security in mind from the beginning,
as opposed to adding security to an already existing system. In e-commerce, for
instance, the focus is often on the exchange of data, and the encryption that is
used. But there is also the server, data transaction protocols, and client software,
all of which are part of a larger picture that needs a comprehensive security plan.

Error Handling, Failure, Bugs, and Error Conditions

Coding policies should include concrete strategies for error handling. Java,
for instance, supports exceptions; how and where exceptions are to be used
should be spelled out in the policy. How the program is going to be tested,
and how defensive runs will be practiced, should also be included in the
document. Standards for how to use finally clauses in try/catch blocks
should be outlined as well.
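For instance, a policy might standardize on releasing resources in a finally clause, so that a thrown exception can never leak an open handle. A minimal sketch of that rule (the class and method names here are illustrative, not from any particular policy):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class PolicyCompliantRead {

    // Reads the first line of a file, always closing the reader in a
    // finally clause so an exception cannot leak the file handle.
    public static String firstLine(String path) throws IOException {
        BufferedReader reader = new BufferedReader(new FileReader(path));
        try {
            return reader.readLine();
        } finally {
            reader.close();  // runs whether readLine() returns or throws
        }
    }
}
```

On Java 7 and later, the same guarantee can be expressed more concisely with a try-with-resources statement.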

One common problem is what happens to a program when it fails. If a program falls
into insecure behavior after a failure, an attacker only has to cause a failure and
wait. Programs should have a means of failing gracefully.
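One sketch of what failing gracefully can mean in practice is an authorization check that "fails closed": on any error it denies access rather than falling into permissive behavior. The PermissionStore interface below is hypothetical, standing in for whatever lookup a real system would perform:

```java
public class FailClosedCheck {

    // Hypothetical permission lookup; in a real system this might query
    // a database or policy server, either of which can fail unexpectedly.
    interface PermissionStore {
        boolean isAllowed(String user) throws Exception;
    }

    // Fails closed: any exception from the store results in denial, so an
    // attacker cannot gain access simply by causing a failure and waiting.
    public static boolean checkAccess(PermissionStore store, String user) {
        try {
            return store.isAllowed(user);
        } catch (Exception e) {
            return false;  // deny on failure instead of falling open
        }
    }
}
```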

Object Access Specifications

A policy should define when and how object states are to be used. Any class, method,
object, or variable that is not private is a potential entry point for an attack. By
default, everything should be private. If classes, methods, variables, or objects are
not private, the policy should require documenting why, and under what circumstances.
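A minimal illustration of the private-by-default rule (the Account class is invented for this example): all state is private, and the only public entry points validate their input, so outside code can neither read the field directly nor drive the object into an invalid state.

```java
public final class Account {

    // Private by default: no outside code can read or overwrite
    // the balance directly.
    private long balanceCents;

    public Account(long initialCents) {
        this.balanceCents = initialCents;
    }

    // The only way to change the balance is through a method that
    // validates its argument.
    public void deposit(long cents) {
        if (cents <= 0) {
            throw new IllegalArgumentException("deposit must be positive");
        }
        balanceCents += cents;
    }

    public long getBalanceCents() {
        return balanceCents;
    }
}
```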

Inner classes are translated by the Java compiler into ordinary classes and are another
item the policy should address. These compiled inner classes are accessible to any code
in the same package. The inner class also gets access to the fields of the enclosing
outer class, even if those fields are supposed to be private.
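This is easy to demonstrate (Outer and Leaker are illustrative names). Depending on the compiler version, the access is granted through a generated synthetic accessor or, on newer JDKs, nest-based access control; either way, the inner class compiles to a separate class file in the same package:

```java
public class Outer {
    private String secret = "s3cret";  // nominally private

    // The compiler emits this as a separate class file (Outer$Leaker),
    // yet it can still read the "private" field above.
    class Leaker {
        String reveal() {
            return secret;  // legal, despite secret being private
        }
    }

    public static String demo() {
        Outer o = new Outer();
        return o.new Leaker().reveal();
    }
}
```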

Non-final classes are also dangerous, since they can often be extended in unforeseen ways.
If a non-final class is necessary, the policy document should state why, and when it may
be extended.
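As a small sketch of the alternative (the Validator class is invented for this example), declaring a class final removes the risk that an attacker subclasses it and overrides its methods to bypass a check:

```java
// final prevents subclassing, so no one can override isSafeName()
// with a more permissive version.
public final class Validator {

    // Accepts only letters, digits, and underscores; rejects path
    // characters such as '.' and '/'.
    public static boolean isSafeName(String name) {
        return name != null && name.matches("[A-Za-z0-9_]+");
    }
}

// This would now be rejected by the compiler:
// class EvilValidator extends Validator { ... }
```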

Part 2 of this series will continue to look at secure coding policy and, in particular,
using cryptography and the compartmentalization of code.

References and Resources

  1. Java 2 Network Security, Second Edition. Pistoia, Reller, Gupta, Nagnur,
    and Ramani. Prentice Hall, 1999.
  2. Java Security Handbook. Jamie Jaworski and Paul Perrone. SAMS Publishing.
  3. Securing Java. Gary McGraw and Ed Felten. John Wiley & Sons, Inc.
  4. Princeton University’s Secure Internet Programming Team.

About the Author

Thomas Gutschmidt is a freelance writer, in Bellevue, Wash., who also works for
Widevine Technologies.
