This material is from the book Java Web Services Architecture, written by James McGovern, Sameer Tyagi, Michael Stevens, and Sunil Mathew, published by Morgan Kaufmann Publishers.

© Copyright Morgan Kaufmann Publishers. All rights reserved.

Transaction Management

The notion of a transaction is fundamental to business systems architectures. A transaction, simply put, ensures that only agreed-upon, consistent, and acceptable state changes are made to a system—regardless of system failure or concurrent access to the system's resources.

With the advent of Web service architectures, distributed applications (macroservices) are being built by assembling existing, smaller services (microservices). The microservices are usually built with no a priori knowledge of how they may be combined. The resulting complex architectures introduce new challenges to existing transaction models. Several new standards are being proposed that specify how application servers and transaction managers implement new transaction models that accommodate the introduced complexities.

The first section of this series introduces the fundamental concepts behind transactions and explains how transactions are managed within the current Java/J2EE platforms. Later sections discuss the challenges of using existing transaction models for Web services, explain newly proposed models and standards for Web service transactions, and finally, detail proposed implementations of these new models on the Java platform.


A transaction may be thought of as an interaction with the system, resulting in a change to the system state. While the interaction is in the process of changing system state, any number of events can interrupt the interaction, leaving the state change incomplete and the system state in an inconsistent, undesirable form. Any change to system state within a transaction boundary, therefore, has to ensure that the change leaves the system in a stable and consistent state.

A transactional unit of work is one in which the following four fundamental transactional properties are satisfied: atomicity, consistency, isolation, and durability (ACID). We will examine each property in detail.


Atomicity

It is common to refer to a transaction as a "unit of work." In describing a transaction as a unit of work, we are describing one fundamental property of a transaction: that the activities within it must be considered indivisible—that is, atomic.

A Flute Bank customer may interact with Flute's ATM and transfer money from a checking account to a savings account. Within the Flute Bank software system, a transfer transaction involves two actions: debit of the checking account and credit to the savings account. For the transfer transaction to be successful, both actions must complete successfully. If either one fails, the transaction fails. The atomic property of transactions dictates that all individual actions that constitute a transaction must succeed for the transaction to succeed, and, conversely, that if any individual action fails, the transaction as a whole must fail.
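The all-or-nothing behavior described above can be illustrated with a minimal in-memory sketch. The TransferDemo class and its account names are illustrative assumptions, not part of any Flute Bank API; a real system would delegate the commit/rollback decision to a transaction manager rather than check balances by hand.

```java
import java.util.HashMap;
import java.util.Map;

public class TransferDemo {
    private final Map<String, Long> balances = new HashMap<>();

    public TransferDemo(long checking, long savings) {
        balances.put("checking", checking);
        balances.put("savings", savings);
    }

    /** Returns true if the transfer committed, false if it "rolled back". */
    public boolean transfer(long amount) {
        long checking = balances.get("checking");
        long savings = balances.get("savings");
        // Debit step fails if funds are insufficient: neither account changes.
        if (checking < amount) {
            return false;
        }
        balances.put("checking", checking - amount); // debit checking
        balances.put("savings", savings + amount);   // credit savings
        return true;                                 // both actions succeeded
    }

    public long balance(String account) {
        return balances.get(account);
    }
}
```

Either both the debit and the credit take effect, or neither does; there is no observable state in which only one of the two actions has completed.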


Consistency

A database or other persistent store usually defines referential and entity integrity rules to ensure that data in the store is consistent. A transaction that changes the data must ensure that the data remains in a consistent state—that data integrity rules are not violated, regardless of whether the transaction succeeded or failed. The data in the store may not be consistent during the duration of the transaction, but the inconsistency is invisible to other transactions, and consistency must be restored when the transaction completes.


Isolation

When multiple transactions are in progress, one transaction may want to read the same data another transaction has changed but not committed. Until the transaction commits, the changes it has made should be treated as transient state, because the transaction could roll back the change. If other transactions read intermediate or transient states caused by a transaction in progress, additional application logic must be executed to handle the effects of some transactions having read potentially erroneous data. The isolation property of transactions dictates how concurrent transactions that act on the same subset of data behave. That is, the isolation property determines the degree to which effects of multiple transactions, acting on the same subset of application state, are isolated from each other.

At the lowest level of isolation, a transaction may read data that is in the process of being changed by another transaction but that has not yet been committed. If the first transaction is rolled back, the transaction that read the data would have read a value that was not committed. This level of isolation—read uncommitted, or "dirty read"—can cause erroneous results but ensures the highest concurrency.

An isolation of read committed ensures that a transaction can read only data that has been committed. This level of isolation is more restrictive (and consequently provides less concurrency) than a read uncommitted isolation level and helps avoid the problem associated with the latter level of isolation.

An isolation level of repeatable read signifies that a transaction that read a piece of data is guaranteed that the data will not be changed by another transaction until the transaction completes. The name "repeatable read" for this level of isolation comes from the fact that a transaction with this isolation level can read the same data repeatedly and be guaranteed to see the same value.

The most restrictive form of isolation is serializable. This level of isolation combines the properties of repeatable-read and read-committed isolation levels, effectively ensuring that transactions that act on the same piece of data are serialized and will not execute concurrently.
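The four isolation levels described above map directly onto the constants defined by JDBC's java.sql.Connection interface. The sketch below simply lists them in order of increasing restrictiveness; in a real application you would apply one to a live connection with conn.setTransactionIsolation(...), which is shown here only as a comment because no database is assumed.

```java
import java.sql.Connection;

public class IsolationLevels {
    // The four ANSI isolation levels as exposed by JDBC, ordered from the
    // least restrictive (highest concurrency) to the most restrictive.
    public static final int[] LEVELS = {
        Connection.TRANSACTION_READ_UNCOMMITTED, // dirty reads possible
        Connection.TRANSACTION_READ_COMMITTED,   // only committed data visible
        Connection.TRANSACTION_REPEATABLE_READ,  // re-reads see the same value
        Connection.TRANSACTION_SERIALIZABLE      // transactions fully serialized
    };

    // Applied to a live JDBC connection it would look like:
    // conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
}
```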


Durability

The durability property of transactions refers to the fact that the effect of a transaction must endure beyond the life of a transaction and application. That is, state changes made within a transactional boundary must be persisted onto permanent storage media, such as disks, databases, or file systems. If the application fails after the transaction has committed, the system should guarantee that the effects of the transaction will be visible when the application restarts. Transactional resources are also recoverable: should the persisted data be destroyed, recovery procedures can be executed to recover the data to a point in time (provided the necessary administrative tasks were properly executed). Any change committed by one transaction must be durable until another valid transaction changes the data.

Isolation Levels and Locking

Traditionally, a transaction achieves its isolation level by taking locks on the data it accesses and holding them until the transaction completes. There are two primary modes for taking locks: optimistic and pessimistic. These two modes are necessitated by the fact that when a transaction accesses data, its intention to change (or not change) the data may not be readily apparent.

Some systems take a pessimistic approach and lock the data so that other transactions may read but not update the data accessed by the first transaction until the first transaction completes. Pessimistic locking guarantees that the first transaction can always apply a change to the data it first accessed.

In an optimistic locking mode, the first transaction accesses data but does not take a lock on it. A second transaction may change the data while the first transaction is in progress. If the first transaction later decides to change the data it accessed, it has to detect the fact that the data is now changed and inform the initiator of the fact. In optimistic locking, therefore, the fact that a transaction accessed data first does not guarantee that it can, at a later stage, update it.
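The conflict-detection step of optimistic locking is commonly implemented with a version number. The sketch below uses a hypothetical in-memory record type; a real system would typically run the same check inside an UPDATE ... WHERE version = ? statement and inspect the affected row count.

```java
public class OptimisticRecord {
    private String data;
    private int version = 0;

    public OptimisticRecord(String data) { this.data = data; }

    public String read() { return data; }
    public int readVersion() { return version; }

    /**
     * Attempt an update: succeeds only if the record has not been changed
     * since the caller read it, i.e., the version still matches.
     */
    public synchronized boolean update(String newData, int expectedVersion) {
        if (version != expectedVersion) {
            return false;   // conflict detected; caller must re-read and retry
        }
        data = newData;
        version++;          // every successful write bumps the version
        return true;
    }
}
```

A caller that read the record first is not guaranteed the later update: if another transaction committed in between, the stale version is rejected and the caller must retry.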

At the most fundamental level, locks can be classified into (in increasingly restrictive order) shared, update, and exclusive locks. A shared lock signifies that another transaction can take an update or another shared lock on the same piece of data. Shared locks are used when data is read (usually in pessimistic locking mode).

An update lock ensures that another transaction can take only a shared lock on the same data. Update locks are held by transactions that intend to change data (not just read it).

If a transaction locks a piece of data with an exclusive lock, no other transaction may take a lock on the data. For example, a transaction with an isolation level of read uncommitted does not result in any locks on the data read by the transaction, and a transaction with repeatable read isolation can take only a shared lock on data it has read.
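The compatibility rules described above for shared, update, and exclusive locks can be summarized in a small table. The types below are illustrative, not from any particular lock manager: compatible(held, requested) answers whether a transaction may take the requested lock while another transaction holds the given lock on the same data.

```java
enum LockMode { SHARED, UPDATE, EXCLUSIVE }

class LockCompatibility {
    static boolean compatible(LockMode held, LockMode requested) {
        switch (held) {
            case SHARED:
                // Others may take a shared or an update lock, but not exclusive.
                return requested != LockMode.EXCLUSIVE;
            case UPDATE:
                // Others may take only a shared lock.
                return requested == LockMode.SHARED;
            default:
                // EXCLUSIVE: no other transaction may take any lock.
                return false;
        }
    }
}
```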

Locking to achieve transaction isolation may not be practical for all transactional environments; however, it remains the most common mechanism to achieve transaction isolation.

This article was originally published on August 8, 2003
