Improve J2EE Application Performance with Caching

Caching is one of the tried-and-true techniques for improving the efficiency of an application. That’s especially true in an enterprise environment, where the range of users can include remote clients accessing the app in a variety of modes, and in integrated apps where several departments are constantly grabbing at the same data.

If your enterprise framework is J2EE, caching is something you’ll want to make frequent use of; but you’ll also want to be thoughtful about where and when you use it.

The concept of temporarily storing frequently referenced data at your application’s elbow, to save the overhead of repeated trips into the database, is so well entrenched that many databases can be configured to do it passively. And within J2EE, the application server can passively cache entity beans, if you’re using entity beans for data access.

But as the app developer in an enterprise environment, you’ll soon see that these measures aren’t enough to get your apps to the efficiency level they need to reach. RDBMS caching doesn’t buy you enough performance, because every request still takes a hit between the J2EE server and the database; and entity bean caching happens way down the call path, so the upper tiers still do their work on every request.

Go active as well as passive

In addition to these passive steps, you can get aggressive with these J2EE caching tricks:

  • Try JSP’s cache tags. JSP pages can be cached easily, using cache tags freely available in JSP tag libraries. This is a solid efficiency move within the J2EE server, keeping a large number of users happy with a minimum of effort. But you only want to go this route if JSP is your presentation mechanism.
  • Servlet 2.4 caching filter. This one’s free (lots of implementations are available; you can Google it), easy, and gives you a high return. It amounts to a request filter that intercepts requests for pages known to contain static reference data and, instead of going to the actual page containing the data, returns the cached version (see the filter sketch after this list).
  • Turn repetitive data into Java objects. This is trickier and requires more effort on your part, but it’s also a lot more fun. There’s an API to help you do it called JCache ( www.jcp.org/jsr/detail/107.jsp ). The advantage is that the cached objects live in a single tier; the downside is that you must be cautious about using this technique if the cached data is likely to be updated, because you then have invalid cached data and must either throw the cached objects away or come up with a mechanism for refreshing them (a sketch of the idea follows this list).
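To make the filter idea concrete, here’s a minimal sketch of the kind of caching filter the Servlet 2.4 API lets you write. Everything specific in it is an assumption for illustration (the SimpleCacheFilter name, keying the cache on the request URI, ignoring headers, expiry, and invalidation), so treat it as the shape of the technique rather than a drop-in implementation:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.servlet.*;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpServletResponseWrapper;

    // Illustrative caching filter: serves stored bytes for pages known to
    // contain static reference data, keyed by request URI.
    public class SimpleCacheFilter implements Filter {

        private final Map<String, byte[]> cache = new ConcurrentHashMap<String, byte[]>();

        public void init(FilterConfig config) { }

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            HttpServletResponse response = (HttpServletResponse) res;
            String key = request.getRequestURI();

            byte[] cached = cache.get(key);
            if (cached != null) {
                // Cache hit: skip the rest of the chain and return the stored page.
                response.getOutputStream().write(cached);
                return;
            }

            // Cache miss: capture the page's output while it is generated.
            // (This sketch only captures output written through getOutputStream();
            // a real filter would wrap getWriter() as well.)
            final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            HttpServletResponseWrapper wrapper = new HttpServletResponseWrapper(response) {
                public ServletOutputStream getOutputStream() {
                    return new ServletOutputStream() {
                        public void write(int b) {
                            buffer.write(b);
                        }
                    };
                }
            };
            chain.doFilter(request, wrapper);

            byte[] body = buffer.toByteArray();
            cache.put(key, body);
            response.getOutputStream().write(body);
        }

        public void destroy() { }
    }

You’d register the filter in web.xml with the standard <filter> and <filter-mapping> elements, mapped only to the URL patterns you know serve static reference data.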
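Here, too, is a hedged sketch of the “repetitive data as Java objects” idea. Rather than guess at the JCache API itself, it uses a plain hand-rolled cache; the ReferenceDataCache class, the Country value object, and the loadCountriesFromDatabase() call are hypothetical names standing in for your own reference data and data-access code:

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Illustrative read-mostly cache: reference data held as Java objects in a
    // single tier, so callers never hit the database for this lookup.
    public class ReferenceDataCache {

        // Immutable value object representing one row of reference data.
        public static class Country {
            private final String code;
            private final String name;
            public Country(String code, String name) {
                this.code = code;
                this.name = name;
            }
            public String getCode() { return code; }
            public String getName() { return name; }
        }

        private volatile Map<String, Country> countriesByCode = Collections.emptyMap();

        // Load (or reload) the data; readers see either the old map or the new
        // one, never a half-built map, because the field swap is atomic.
        public void refresh() {
            Map<String, Country> fresh = new HashMap<String, Country>();
            for (Country c : loadCountriesFromDatabase()) {   // hypothetical DAO call
                fresh.put(c.getCode(), c);
            }
            countriesByCode = Collections.unmodifiableMap(fresh);
        }

        public Country lookup(String code) {
            return countriesByCode.get(code);
        }

        // Placeholder for the real data-access call.
        private List<Country> loadCountriesFromDatabase() {
            return Collections.emptyList();
        }
    }

The caveat from the list above applies: this only stays simple as long as the underlying table changes rarely, and refresh() (or some eviction scheme) has to run whenever it does.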

The time and place for caching in J2EE

Caching repetitive data isn’t just strategic; it’s also very tactical, and a caching strategy is only effective if it is approached with an awareness of the framework within which it’s being implemented. In J2EE, that means being sensitive to your architecture’s tiers, your server configuration and the complexity trade-offs involved.

For instance, a good general rule of thumb is to implement caching when it eliminates the need for a remote call; and a good J2EE-specific rule of thumb is to implement caching when it eliminates a need for one tier to make a call to an underlying tier.
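For instance, a small web-tier helper can hold the result of an EJB-tier call in application scope, so that only the occasional cache miss crosses the tier boundary. A sketch under assumed names (CategoryListHelper, CatalogService, and findAllCategories() are all invented for illustration):

    import java.util.List;
    import javax.servlet.ServletContext;

    // Illustrative web-tier helper: keeps the result of a remote, EJB-tier call
    // in application scope so most requests never leave the web tier.
    public class CategoryListHelper {

        private static final String KEY = "cached.categoryList";

        // Hypothetical remote business interface living in the EJB tier.
        public interface CatalogService {
            List findAllCategories();
        }

        public static List getCategories(ServletContext ctx, CatalogService service) {
            List categories = (List) ctx.getAttribute(KEY);
            if (categories == null) {
                // Only a miss pays for the cross-tier call. Two threads may race
                // here and both pay; that's harmless for read-only reference data.
                categories = service.findAllCategories();
                ctx.setAttribute(KEY, categories);
            }
            return categories;
        }
    }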

As the app developer, you’re the one best suited to evaluate the trade-offs involved in placement of a cache implementation. There are a number of things to consider:

Does caching at a particular point reduce network activity significantly? This is one of your more important considerations. If the answer is a resounding yes, then you have a strong case for caching.

Is the amount of data being cached going to be manageable? Don’t assume that it is. If, for instance, you consider caching an entire database table that happens to be the most popular table in the company, you do yourself no favors if you hike it up from the database to the J2EE server, only to bury the server because the table has several million rows and everyone who looks at it wants to look at something different. You’re piling inefficiency on top of a dubious efficiency; yes, it’s more work to send the user all the way into the database, but on the other hand the database is designed to offer up small subsets of table data.
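One way to keep the cached volume manageable is to bound the cache and let the least recently used entries fall out, rather than dragging a whole table into the J2EE server. A minimal sketch built on java.util.LinkedHashMap (the class name and fixed entry limit are illustrative choices, not a prescribed API):

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Size-bounded, least-recently-used cache: once maxEntries is reached, the
    // stalest entry is dropped instead of letting the cache grow without limit.
    // Not thread-safe by itself; wrap with Collections.synchronizedMap() in a server.
    public class BoundedLruCache<K, V> extends LinkedHashMap<K, V> {

        private final int maxEntries;

        public BoundedLruCache(int maxEntries) {
            super(16, 0.75f, true);   // true = order entries by access, not insertion
            this.maxEntries = maxEntries;
        }

        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > maxEntries;
        }
    }

Capping the cache at, say, a few thousand hot rows gives you most of the benefit without asking the J2EE server to shoulder the whole table.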

Is the data to be cached read-only, and how static is it? This depends on the nature of the data, the applications it serves, and the users’ needs. The rule of thumb is this: as the static/dynamic ratio swings toward ‘dynamic,’ the complexity of caching increases, because you have to detect differences between the cached data and the source data when the source is frequently updated. And if the source of the cached data is being actively written to, rather than read-only, then concurrency becomes an issue as well. In either case, complexity increases, and you must decide whether it’s worth your effort to cache.
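When the ratio isn’t entirely on the static side, the simplest way to notice that cached data has drifted from the source is a time-to-live: record when each entry was loaded and refetch it once it’s older than an agreed window. A sketch, with the ExpiringEntry name and the five-minute window chosen purely for illustration:

    // Illustrative cache entry that expires after a fixed time-to-live, forcing
    // a reload from the source once the data is deemed too old to trust.
    public class ExpiringEntry<V> {

        private static final long TIME_TO_LIVE_MS = 5 * 60 * 1000;  // five minutes

        private final V value;
        private final long loadedAt;

        public ExpiringEntry(V value) {
            this.value = value;
            this.loadedAt = System.currentTimeMillis();
        }

        public boolean isExpired() {
            return System.currentTimeMillis() - loadedAt > TIME_TO_LIVE_MS;
        }

        public V getValue() { return value; }
    }

A caller checks isExpired() before using the value and reloads from the source on expiry; whether a staleness window of that size is acceptable is exactly the business judgment this question is asking you to make.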

How much complexity are you adding to the application to support the caching function? This ties in to the read/write question. Are you introducing threading issues, or the possibility of errors in the query process? What will be the consequence to the user or application if queries fail?

What is the consequence of faulty data? If a difference between a cached data item and the source item is critical to the user, then don’t risk caching. If the difference is trivial, or not particularly time-sensitive, you’re on safer ground.

The important take-home point in implementing caching is that you can only use it effectively if you have a thorough understanding of your users’ business needs and a strong sense of your J2EE application environment’s architecture and database interfaces. There is no blanket policy regarding caching; it’s not a data rake or shovel, but a tool for fine work. Use it with considerable forethought and your apps will speed up considerably.
