Understanding the Java Thread Model


A thread in a Java program runs asynchronously as an independent path of execution. It is essentially a subset of code designed to execute concurrently with other subsets of the same program. The motivation behind threading is to leverage concurrency, but programs built on parallel logic are not only difficult to conceptualize but also difficult to implement consistently. Even when we devise logic to run in parallel, we cannot be sure of its consistency. Concurrent execution of code raises numerous issues such as synchronization, deadlocks, atomicity, resource sharing, and so forth. This article is a slightly detoured exploration of threads in general and the Java thread model in particular.

Thread as a Process

Operating systems evolved to run more than one process at once. These processes execute in an isolated, independent manner, and each is allocated resources such as memory, file handles, and security credentials. On occasion, one process communicates with another through a mechanism such as sockets, signals, shared memory, semaphores, or files. A thread is similar to a process but works on a smaller scale, and is sometimes called a lightweight process. It allows multiple streams of program control flow to coexist within a single process.

Problems with Non-sequential Execution

Normal Java code executes sequentially. When we apply the mechanism of threading, we are actually applying a tweak to an otherwise straightforward sequential model of programming. This tweak bifurcates (or n-furcates) the code to work in a concurrent environment where each execution path is called a thread. In a multi-threaded environment, each thread should execute in isolation, independent of other threads, to minimize dependency, because dependency may halt execution completely. Sometimes, two or more threads compete for the same resource. This may lead to a race condition or a deadlock. To overcome such a situation, we can broadly think of two ideas:

  • Prevent threads from entering such a situation in the first place.
  • Cure the problem only after it has occurred.

Problems with multi-threaded programming can be quite damaging if threads are not synchronized. Isolated execution of threads is the ideal situation, but in practice inter-thread communication is a necessity, especially when one thread needs a resource held by another. In such a case, the requesting thread must be polite enough to wait until the other thread releases the resource.

About Thread Safety

In an imperative programming language such as Java, it is not easy to create parallel logic without closely considering the issue of thread safety, because thread safety is not an inherent principle of the language design. The code must be made thread-safe explicitly. There are no strict rules that guarantee thread safety, but considering the following pointers may help:

  • Immutable objects are thread-safe by default because their state cannot be modified once created.
  • Final variables in Java are thread-safe.
  • Minimize sharing of objects among multiple threads.
  • Apply locking when working with shared resources.
  • Some classes designed with thread safety in mind are String, Hashtable, ConcurrentHashMap, and so on. Consult the Java API documentation for more information on such classes.
  • Static variables, unless explicitly synchronized, are a potential pothole of unsafe operation.
  • Only atomic operations are thread-safe. For example, a++ looks like a single operation but is not atomic: it is shorthand for a=a+1 and consists of a read, an addition, and a write. There is, however, a way to make the increment operation atomic with the help of AtomicInteger, as follows:
    AtomicInteger aInt = new AtomicInteger(5);
    aInt.incrementAndGet(); // atomically increments the value to 6
  • There are similar classes for other data types, such as AtomicBoolean, AtomicLong, AtomicLongArray, and the like.
  • Local variables are thread-safe because each thread has its own copy on its own stack.
  • The volatile keyword can be used to ensure that threads do not cache a variable's value.
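To illustrate the atomicity point above, here is a minimal sketch (the class name AtomicCounter and the iteration counts are my own) in which two threads increment a shared counter. With a plain int and a++, updates could be lost to races; incrementAndGet() makes each increment an atomic read-modify-write, so the final count is always 20,000.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounter {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger(0);

        // Each thread performs 10,000 atomic increments on the shared counter.
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                counter.incrementAndGet(); // atomic read-modify-write
            }
        };

        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        System.out.println(counter.get()); // always 20000
    }
}
```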

The problem with multi-threaded programming is that it offers the power of concurrency and the power of ruin at the same time. Once we invite the mechanism of threading, we cannot ignore its pitfalls. It is very difficult to create a deadlock in a pure functional language like Haskell; but if we really want to extract work from the multiple CPUs (or multi-core CPUs) in modern machines, there is no other way than to accept threading, pitfalls and all.

Today’s software is no longer simple. A program must be pretty, user friendly, secure, and frequently updated (read: patched, mostly due to bugs introduced in achieving that prettiness and the like), apart from serving its actual purpose. Observe the operating systems of today: a great deal of code has been invested in enhancing the look and feel of the system. To drive these so-called looks, machines need raw hardware power, and software must employ every possible mechanism to harness that power effectively. Meeting the competitive demands of concurrency also makes the code increasingly complex, a breeding ground for bugs. It is therefore difficult to ignore the necessity of parallel computing when it comes to leveraging that power at the software end. A word of caution, though: utmost care should be taken while designing parallel logic to maintain the sanity of the application when executed. Java APIs have some excellent support in this regard, such as the Fork/Join framework, which eases the programmer's task of creating parallel code, yet the complexity and the chance of introducing more bugs remain. The Fork/Join framework helps to build a multi-threaded application that scales automatically to leverage multi-core environments.
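As a sketch of the Fork/Join idea mentioned above (the class name RangeSum and the threshold are my own choices, not from the original article), a RecursiveTask recursively splits a job until each piece is small enough to compute directly; the common pool then runs the pieces across available cores:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Sums the integers in [lo, hi] by splitting the range across worker threads.
class RangeSum extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000;
    private final long lo, hi;

    RangeSum(long lo, long hi) { this.lo = lo; this.hi = hi; }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {           // small enough: sum directly
            long sum = 0;
            for (long i = lo; i <= hi; i++) sum += i;
            return sum;
        }
        long mid = (lo + hi) / 2;
        RangeSum left = new RangeSum(lo, mid);
        RangeSum right = new RangeSum(mid + 1, hi);
        left.fork();                           // run the left half asynchronously
        return right.compute() + left.join();  // compute right here, then combine
    }

    public static void main(String[] args) {
        long total = ForkJoinPool.commonPool().invoke(new RangeSum(1, 1_000_000));
        System.out.println(total); // 500000500000
    }
}
```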

Thread in Java Memory

Let’s get a rudimentary idea of the Java memory model because, to understand the Java thread model, it is necessary to understand at least a few aspects of the JVM (Java Virtual Machine) memory layout. After all, memory is the playground of threads.

The JVM is a system that runs on top of the actual operating system. It has its own memory management scheme that works in sync with the underlying platform's memory architecture.

The Java memory model specifies how a thread interacts with the synchronization process while accessing shared variables. It segments memory into two parts: a stack area and a heap area.

Figure 1: The Java thread manager

Each running thread creates its own stack in the stack area; this stack contains all the information specific or local to the thread, such as all declared primitive variables and method calls. This area is not sharable between threads and the stack size changes dynamically according to the running condition of the thread.

The heap area stores the objects created by the Java application. These objects are sharable by all threads; any thread that holds a reference to an object can access its methods. When two or more threads call a shared object's method, each thread gets its own copy of the method's local variables in its stack area, but the object's member variables remain shared on the heap.
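The stack/heap distinction can be sketched in a few lines (the class and field names are illustrative, not from the original article): a local variable lives on each calling thread's stack and can never race, while an instance field lives on the heap and is shared by every thread holding a reference.

```java
class Shared {
    int field = 0; // one copy on the heap, visible to every thread with a reference

    void work() {
        int local = 0;  // on the calling thread's stack: one copy per thread
        local++;        // can never race with another thread
        field++;        // shared state: races unless access is synchronized
    }

    public static void main(String[] args) {
        Shared s = new Shared();
        s.work();
        System.out.println(s.field); // 1 after a single call
    }
}
```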

Jakob Jenkov has written an excellent collection of articles on various issues related to threads and the Java memory model. Refer to it for more details.

Thread Priorities

A thread can exist in more than one state: it may be running, or suspended temporarily and later resumed from the point where it left off. A thread is in a blocked state when it waits for a resource, and a thread can be terminated at any time. Java enables us to assign priorities when working with multiple threads. Priority is specified by an integer and essentially decides when execution switches from one thread to another, which is called context switching. The rules that decide context switching among threads are as follows:

  • A thread can voluntarily relinquish control by explicitly yielding, sleeping, or blocking on a pending I/O operation. In such a case, the thread with the next highest priority gets the CPU slot.
  • A higher-priority thread can pre-empt any lower-priority thread and grab the CPU slot, no matter what that thread was doing at the moment; it is halted to make way for the higher-priority thread.

As should be obvious, the question of priority does not arise with a single running thread. But when there are competing threads with the same priority, the situation is a bit tricky: it depends on the underlying platform which thread gets the first CPU slot. For example, on Windows, threads of the same priority get equal opportunity in a round-robin fashion. Other systems may enforce different rules, such as having one or more equal-priority threads voluntarily yield control to their peers.
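Priorities are set through the Thread API, as the short sketch below shows (the thread names are mine). Note that a priority is only a hint to the scheduler: as discussed above, the actual ordering of execution is platform-dependent, so no particular output order is guaranteed.

```java
public class PriorityDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> System.out.println(
                Thread.currentThread().getName() + " runs at priority "
                + Thread.currentThread().getPriority());

        Thread low = new Thread(work, "low");
        Thread high = new Thread(work, "high");

        low.setPriority(Thread.MIN_PRIORITY);   // 1
        high.setPriority(Thread.MAX_PRIORITY);  // 10

        low.start();
        high.start();
        low.join();
        high.join();
    }
}
```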


Thread Synchronization

Java threads are asynchronous by nature; as a result, they must be synchronized explicitly, especially when dealing with shared resources. For example, when we know that executing threads may compete for a single resource, we must enclose the relevant code in a synchronized block to ensure that a conflicting situation does not arise. One popular technique, called the monitor, is applied here. A monitor can be thought of as a control that allows only one thread at a time to execute a protected section of code. Java makes this easy because every object has its own monitor. Once a thread enters a synchronized block, it is assured that no conflict will arise.
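The monitor idea can be sketched as follows (the class names are mine): marking both methods synchronized means a thread must acquire the SafeCounter object's monitor before entering either one, so the two incrementing threads can never interleave inside count++.

```java
// A shared counter guarded by the object's intrinsic monitor.
class SafeCounter {
    private int count = 0;

    public synchronized void increment() { // only one thread at a time may enter
        count++;
    }

    public synchronized int get() {
        return count;
    }
}

public class MonitorDemo {
    public static void main(String[] args) throws InterruptedException {
        SafeCounter c = new SafeCounter();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) c.increment();
        };

        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        System.out.println(c.get()); // always 20000, thanks to the monitor
    }
}
```

Without the synchronized keyword, count++ would be an unguarded read-modify-write and some increments could be lost.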

Communication Between Threads

Java threads do not depend on the underlying platform to establish communication between threads. Instead, they use an elegant mechanism based on predefined methods of Object: wait(), notify(), and notifyAll(). These methods are declared final in the Object class; as a result, all Java classes inherit them. However, they can only be called from within a synchronized context. The rules are simple:

  • The method wait() tells the calling thread to relinquish the monitor and go to sleep until some other thread calls notify() or notifyAll().
  • The method notify() wakes up a single thread that called wait() on the same object; in a similar manner, notifyAll() wakes up all such waiting threads.
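These rules can be put together in a small sketch (the Mailbox class is my own illustration): a consumer calls wait() inside a synchronized method until a producer deposits a message and calls notifyAll(). The while loop guards against spurious wakeups, as the wait() documentation recommends.

```java
// A one-slot mailbox: the consumer waits until the producer notifies.
public class Mailbox {
    private String message;
    private boolean ready = false;

    public synchronized void put(String msg) {
        message = msg;
        ready = true;
        notifyAll();               // wake any thread blocked in take()
    }

    public synchronized String take() throws InterruptedException {
        while (!ready) {           // loop guards against spurious wakeups
            wait();                // releases the monitor while sleeping
        }
        ready = false;
        return message;
    }

    public static void main(String[] args) throws InterruptedException {
        Mailbox box = new Mailbox();
        Thread consumer = new Thread(() -> {
            try {
                System.out.println("got: " + box.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();
        Thread.sleep(100);         // give the consumer a chance to block first
        box.put("hello");
        consumer.join();
    }
}
```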


This is a broad introduction to the Java thread model; there is more to it than meets the eye as we dive deeper. Threading in Java should be applied for the right reasons, because threads invite complexity. If one wants to leverage the productivity of a multi-core system, however, threads can be quite useful. Also, some logic is inherently parallel, and it would be unwise not to use threading in such cases. Like everything else, it boils down to one thing: it's a tool; use it where it is appropriate.
