February 24, 2021

Performance Improvements: Caching

  • By Rob Bogue

If you're looking at performance and you want some quick wins, the obvious place to start is caching. Caching as a concept is focused exclusively on improving performance. It's been used in disk controllers, processors, and other hardware devices since nearly the beginning of computing, and various software techniques have been devised to do caching as well. Fundamentally, caching has one inherent limitation (managing updates) and several design decisions to make. In this article, we'll explore the basic options for caching and their impact on performance.

Note: You can learn about other performance improvements in the articles Performance Improvements: Understanding and Performance Improvements: Session State.

Cache Updates

Caching replaces slower operations, like making calls to a SQL server to instantiate an object, with faster operations, like reading a serialized copy of the object from memory. This can dramatically improve the performance of reading the object; however, what happens when the object changes from time to time? Take, for instance, a common scenario where the user has a cart of products. It's normal, and encouraged, for the user to change what's in their shopping cart. However, if you're displaying a quick summary of the items in a user's cart on each page, that summary may not be something you want to read from the database every time.
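The pattern described above is often called cache-aside. Here is a minimal sketch of it; the function names and the fake database call are hypothetical stand-ins for whatever slow lookup you are caching.

```python
# Minimal cache-aside sketch. fetch_cart_summary_from_db stands in for the
# expensive SQL call and object instantiation described in the article.
_cache = {}

def fetch_cart_summary_from_db(user_id):
    # Pretend this hits the database and builds the object.
    return {"user_id": user_id, "item_count": 3}

def load_cart_summary(user_id):
    # Serve from memory when possible; fall back to the database once,
    # then remember the result for subsequent reads.
    if user_id not in _cache:
        _cache[user_id] = fetch_cart_summary_from_db(user_id)
    return _cache[user_id]
```

The second call for the same user returns the in-memory copy without touching the (simulated) database, which is exactly the win, and the staleness risk, that the rest of the article is about.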

That's where managing cache updates comes in: you have to decide how to handle these updates and still have a cache. At the highest level you have two different strategies. The first is a synchronized approach, where it's important to maintain synchronization of the object at all times. The second is a lazy (or time-based) updating strategy, where having a completely up-to-date object is nice but not essential. Say, for instance, that a product's description was updated to include a new award the product has won; the application may not need to know about that immediately.

Synchronized Update

With a synchronized update, sometimes called coherent cache, you're trying to make sure that updates are synchronized between all of the consumers of the cache. When you're on a single server this is pretty easy. You have access to an in-memory cache and you just clear the value and update it with the new value. However, in a farm where you have multiple servers you have a substantially harder problem to solve. You must now use some sort of a coordination point (like a database, out of process memory store, etc.) and take the performance impact of that synchronization, or you must develop a coordination strategy for the cache.
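On a single server, the "just clear the value and update it" step can be sketched as a small thread-safe wrapper around a dictionary. The class and method names here are illustrative, not from any particular library.

```python
import threading

class LocalCache:
    """In-process cache where synchronized updates simply replace the
    cached value under a lock, as described for the single-server case."""

    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}

    def get(self, key):
        with self._lock:
            return self._data.get(key)

    def update(self, key, value):
        # On a single server, a synchronized update is just an in-place swap.
        with self._lock:
            self._data[key] = value

    def invalidate(self, key):
        # Clear the value so the next reader reloads it from the source.
        with self._lock:
            self._data.pop(key, None)
```

The hard part the article goes on to describe is that in a farm, `update` on one server does nothing for the stale copies held by the others.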

The problem with the synchronized approach is that synchronization reduces the performance of the cache, which is, of course, the whole reason the cache exists. There are implementations of this strategy that require only a small amount of the synchronization data to be shared, and there are strategies that essentially move the cache into a serialized database, not unlike persisted session state. In either strategy the impact of synchronization is not trivial.

The synchronization strategy is a poor fit when there's a relatively large amount of change in the cache. For instance, if you needed to know the last few products a user looked at, data that's going to change rapidly, you may not want to use a synchronized strategy for that data.

Another approach to synchronized cache is to use a coordination approach. With this approach you assume that the number of updates will be small, and because of that you require extra actions when the cache is updated rather than extra activities on every (or nearly every) read. In general, the kinds of data that you want to cache have relatively low volatility (rate of change), so the coordination approach to keeping caches synchronized is often the best choice for a synchronized cache. In this situation, when the object changes, and that change is persisted to whatever the back-end store is, the object notifies the cache manager, and the cache manager notifies all of the servers running the application.

For instance, in a farm with three servers, the cache manager clears the cache on the current server and then signals the other cache managers, say via a web service, that they should clear their cache for a specific object. The number of calls needed obviously increases with the size of the farm, but as a practical matter the frequency is relatively low.
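The notification flow above can be sketched like this. To keep the sketch self-contained, the "web service" call to a peer is simulated as a direct method call; in a real farm it would be an HTTP request or similar.

```python
class CacheManager:
    """Coordination-approach sketch: on an update, the local manager clears
    its own entry, tells every peer to clear theirs, then stores the new
    value. Peer notification stands in for the web-service call."""

    def __init__(self, name):
        self.name = name
        self.cache = {}
        self.peers = []  # the cache managers on the other farm servers

    def handle_update(self, key, value):
        # The change would first be persisted to the back-end store; then:
        self.cache.pop(key, None)
        for peer in self.peers:
            peer.clear_remote(key)  # simulated web-service notification
        self.cache[key] = value

    def clear_remote(self, key):
        # Called by a peer: drop our stale copy so the next read reloads it.
        self.cache.pop(key, None)
```

Note that the peers only *clear* the entry rather than receiving the new value; each server reloads it lazily on its next read, which keeps the notification payload small.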

The final approach to a synchronized cache is having it client managed via a cookie. This strategy is really only viable for user data or session cache when the cached data is small (so that it can be retransmitted without impact), volatile (so that another strategy won't work well), and non-sensitive (so that the user having the information doesn't have any potential security implications). The recently visited products list is a great example of a cache that could be pushed down to the client, which eliminates the need to manage it in a server-side cache.
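A sketch of that recently-visited-products cookie might look like the following. The cookie name, the five-item cap, and the JSON encoding are all illustrative choices, not requirements of the technique.

```python
import json

COOKIE_NAME = "recent_products"  # hypothetical cookie name
MAX_ITEMS = 5                    # keep the cookie small enough to retransmit

def add_recent_product(cookie_value, product_id):
    """Maintain the 'recently viewed' list entirely in a cookie value,
    so the server never has to cache or synchronize it."""
    recent = json.loads(cookie_value) if cookie_value else []
    if product_id in recent:
        recent.remove(product_id)   # re-viewing moves it to the front
    recent.insert(0, product_id)
    return json.dumps(recent[:MAX_ITEMS])
```

The web framework would read the old cookie on each request and set the returned string as the new cookie value; because the data travels with the user, every server in the farm sees the same list with no coordination at all.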

Lazy Update

In some situations, such as the product description changing to include a new award, the change eventually needs to be visible to all members of the farm and in every session, but the application's usefulness doesn't rely upon an absolutely up-to-the-minute cache to function correctly. In these situations there are two basic approaches that can be applied based on your scenario. First, you can allow the cache to expire after a certain passage of time. For instance, caching information about a product is useful, but perhaps you decide that the product should only be cached for ten minutes, on the reasoning that if the product hasn't been accessed within the previous ten minutes it's probably not needed any longer.

Depending upon your scenario, you can choose to apply a fixed window (e.g., every ten minutes the product object in memory is expired) or a sliding window (e.g., the product object in memory is expired only after ten minutes of inactivity). Of course, we're really talking about expiring the cache: in both scenarios the objects are simply cleared from the cache at those intervals, and when they are reloaded or regenerated could be much later. The older data is only returned while the object remains in cache.
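The fixed-versus-sliding distinction can be sketched in a few lines. The clock is injected here purely so the behavior can be demonstrated without actually waiting ten minutes; a real implementation would just read the system time.

```python
class ExpiringCache:
    """TTL-cache sketch. With sliding=True, every successful read renews
    the expiry (a sliding window); otherwise the expiry set at put() time
    is final (a fixed window)."""

    def __init__(self, ttl_seconds, sliding, clock):
        self.ttl = ttl_seconds
        self.sliding = sliding
        self.clock = clock            # injected time source, e.g. time.time
        self._data = {}               # key -> (value, expires_at)

    def put(self, key, value):
        self._data[key] = (value, self.clock() + self.ttl)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if self.clock() >= expires_at:
            del self._data[key]       # expired: caller must reload it
            return None
        if self.sliding:
            # Activity renews the window.
            self._data[key] = (value, self.clock() + self.ttl)
        return value
```

With a 600-second TTL, a fixed-window entry dies 600 seconds after it was stored no matter how often it is read, while a sliding-window entry survives as long as reads keep arriving within 600 seconds of each other.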

When you know specifically when the object data is going to change, say, during a batched update in the middle of the night, you can choose to update the entire cache, generally for a class of objects, at the same time. The effect of this is that you are able to maintain cache efficiency except for the one time per day when you know that updates are happening.
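A scheduled invalidation of a whole class of objects can be as simple as dropping every key that belongs to that class. This sketch assumes a plain dictionary cache whose keys carry a type prefix; both are illustrative conventions, not part of any particular framework.

```python
def clear_object_class(cache, prefix):
    """Scheduled-invalidation sketch: after the nightly batch update, drop
    every cached entry for one class of objects (keys sharing a prefix)."""
    # Materialize the key list first so we don't mutate while iterating.
    for key in [k for k in cache if k.startswith(prefix)]:
        del cache[key]
```

A nightly job would call this once for each object class touched by the batch update, leaving the rest of the cache warm.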


The final scenario, which introduces into your objects a bit of awareness that caching is involved, is to allow for a read-through strategy, where the consumer of the cached objects can specifically request that the object be read from the “source of truth” rather than relying upon the cache. This can be useful when there are specific scenarios where it's critical to know the exact right answer. For instance, consider the importance of having the exact right value during a checkout on a commerce site.
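That escape hatch can be sketched as a flag on the read path. The `bypass_cache` parameter name and the loader callback are hypothetical; the point is that a forced read also refreshes the cached copy.

```python
class ReadThroughCache:
    """Read-through sketch: callers that must see the authoritative value
    (e.g., at checkout) pass bypass_cache=True, which reloads from the
    source of truth and refreshes the cached copy for everyone else."""

    def __init__(self, load_from_source):
        self.load_from_source = load_from_source  # slow, authoritative read
        self._data = {}

    def get(self, key, bypass_cache=False):
        if bypass_cache or key not in self._data:
            self._data[key] = self.load_from_source(key)
        return self._data[key]
```

Ordinary page views call `get("price")` and may see a stale value; the checkout path calls `get("price", bypass_cache=True)` and is guaranteed the current one.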

Read-through strategies don’t solve the issue of a cache getting stale. They just recognize that there are specific times when read operations are critical and other situations where they’re less critical. In recognizing this, the length of time that a cache can be stale can be longer, since critical operations will still use the correct data.


This article was originally published on July 27, 2009
