
A Deeper Look: Java Thread Example

The concept of a thread becomes more intriguing as we look past the broad idea of multitasking and examine how threads are actually constructed. The Java API is rich and provides many features for multitasking with threads; it is a vast and complex topic. This article is an attempt to engage the reader with some concepts that aid in better understanding Java threads, eventually leading to better programming.

A Process

A program in execution is called a process. It is an activity that carries a unique identifier called the process ID, a set of instructions, a program counter (also called the instruction pointer), handles to resources, an address space, and many other things. The program counter keeps track of the instruction currently executing and automatically advances to the next instruction when the current one completes.


Multitasking is the ability to execute more than one task/process at a single instance of time. Multiple CPUs certainly help to execute multiple tasks all at once, but in a single-CPU environment multitasking is achieved with the help of context switching. Context switching is the technique whereby CPU time is shared across all running processes and processor allocation is switched in a time-bound fashion. To schedule a process onto the CPU, the running process is interrupted and halted, and its state is saved; a process that has been waiting for its CPU turn is restored and gains its processing time. This gives the illusion that the CPU is executing multiple tasks at once, while in fact portions of instructions from multiple processes are executed in a round-robin fashion. Even with multiple CPUs, however, a machine can only execute a fixed number of instruction streams truly in parallel; simultaneous execution of arbitrarily many processes remains an abstraction that scheduling merely approximates.

Thread Overview

There is a problem with independent execution of multiple processes: each carries the load of a non-sharable copy of its resources. Much of this could easily be shared across running processes, yet processes are not allowed to do so because they do not share address spaces with one another. If they must communicate, they can do so only via inter-process communication facilities such as sockets or pipes. This poses several problems for process communication and resource sharing, apart from making the process what is commonly called heavy-weight.

Modern operating systems solved this problem by creating multiple units of execution within a process that can share resources and communicate with one another. Each of these single units of execution is called a thread. Every process has at least one thread and can create more, bounded only by the operating system's limit on shared resources, which is usually quite large. Unlike a process, a thread has only a couple of concerns of its own: a program counter and a stack.

  • Program Counter: A program counter keeps track of the current instruction within the thread's execution routine.
  • Stack: A stack stores the values of local variables and method call frames.

A thread within a process shares all of the process's resources, including the address space. A thread, however, can maintain a private memory area called Thread Local Storage, which is not shared even with threads originating from the same process. The illusion of multi-threading is established with the help of context switching. Unlike context switching between processes, a context switch between threads is less expensive because thread communication and resource sharing are easier. Programs can be split into multiple threads and executed concurrently, and a modern machine with a multi-core CPU can further leverage this by scheduling threads on different cores to improve overall performance of program execution.
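As a minimal sketch of splitting a program into concurrently executing threads, the following example (class and thread names are illustrative, not from the article) starts two workers and waits for both to finish:

```java
public class TwoThreadsDemo {
    public static void main(String[] args) throws InterruptedException {
        // Each Runnable is a unit of work scheduled on its own thread.
        Runnable task = () -> System.out.println(
                Thread.currentThread().getName() + " running");

        Thread t1 = new Thread(task, "worker-1");
        Thread t2 = new Thread(task, "worker-2");

        t1.start();   // begin concurrent execution
        t2.start();
        t1.join();    // wait for both workers to complete
        t2.join();
    }
}
```

Both threads share the process's address space, so they could just as easily operate on a common object instead of printing.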

Threads in Java

A thread is associated with two types of memory: main memory and working memory. Working memory is very personal to a thread and is non-sharable; main memory, on the other hand, is shared with other threads. It is through this main memory that the threads actually communicate. However, every thread also has its own stack to store local variables, like the pocket where you keep quick money to meet your immediate expenses.

Because each thread has its own working memory, which includes processor caches and register values, it is up to the Java Memory Model (JMM) to maintain the accuracy of shared values that may be accessed by two or more competing threads. In multi-threading, an update to a shared variable can leave it in an inconsistent state unless operations are coordinated so that every thread reads an accurate value, even amid interleaved read/write operations on that variable. The JMM ensures reliability with various housekeeping guarantees, some of which are as follows:


Atomicity guarantees that a read and write operation on any field is executed indivisibly. Now, what does that mean? According to the Java Language Specification (JLS), int, char, byte, short, float, and boolean operations are atomic, but double and long operations are not. Here's an example:

long longVar = 12345678L;   // not atomic

Because, internally, assigning a 64-bit value involves two separate operations: one that writes the first 32 bits and a second that writes the last 32 bits. Now, what if we are running a 64-bit JVM? The Java Language Specification (JLS) provides the following explanation:

“Some implementations may find it convenient to divide a single write action on a 64-bit long or double value into two write actions on adjacent 32-bit values. For efficiency’s sake, this behaviour is implementation-specific; an implementation of the Java Virtual Machine is free to perform writes to long and double values atomically or in two parts. Implementations of the Java Virtual Machine are encouraged to avoid splitting 64-bit values where possible. Programmers are encouraged to declare shared 64-bit values as volatile or synchronize their programs correctly to avoid possible complications.”

This specifically is a problem when multiple threads read or update a shared variable. One thread may update the first 32-bit half and, before it updates the last 32 bits, another thread may pick up the intermediate value, resulting in an unreliable and inconsistent read operation. This is the problem with instructions that are not atomic. However, there is a way out for long and double variables.

Declare the variable as volatile. Volatile variables are always written to and read from main memory; they are never cached. That gives us the following:

private volatile long longVar;

Or, synchronize the getter/setter:

public synchronized void setLongVar(long val){
   this.longVar = val;
}

public synchronized long getLongVar(){
   return this.longVar;
}
Or, use AtomicLong from java.util.concurrent.atomic package, as shown here:

private AtomicLong longVar = new AtomicLong();
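To illustrate why AtomicLong solves the problem, here is a hedged sketch (the class name and loop counts are illustrative) in which several threads increment a shared 64-bit counter without locks or word tearing:

```java
import java.util.concurrent.atomic.AtomicLong;

public class AtomicLongDemo {
    private final AtomicLong longVar = new AtomicLong();

    public void increment() {
        longVar.incrementAndGet();   // atomic read-modify-write, no lock needed
    }

    public long value() {
        return longVar.get();        // atomic 64-bit read, no half-written values
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicLongDemo demo = new AtomicLongDemo();
        Thread[] workers = new Thread[4];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(() -> {
                for (int n = 0; n < 10_000; n++) demo.increment();
            });
            workers[i].start();
        }
        for (Thread w : workers) w.join();
        System.out.println(demo.value());   // prints 40000, regardless of interleaving
    }
}
```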

Thread Synchronization

Synchronization of thread communication is another issue that can get quite messy unless handled carefully. Java, however, provides multiple ways to establish communication between threads, and synchronization is one of the most basic mechanisms among them. It uses monitors to ensure that access to shared variables is mutually exclusive: any competing thread must go through a lock/unlock procedure to gain access. On entering a synchronized block, a thread reloads the values of all variables in its working memory from main memory, and it writes them back as soon as it leaves the block. This ensures that once the thread is done with a variable, the updated value is in main memory so that another thread can access it soon after.

There are two types of thread synchronization built into Java:

  • Mutual exclusion: Mutual exclusion ensures that only one thread at a time can execute a critical section of code.
  • Conditional synchronization: In conditional synchronization, multiple threads coordinate with one another in a resource-sharing scenario.
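Conditional synchronization is typically built on the wait/notify methods of a monitor. The following is a minimal sketch (class, field, and method names are illustrative) of a one-shot handoff in which a consumer waits until a producer has supplied a value:

```java
public class HandoffDemo {
    private final Object lock = new Object();
    private boolean ready = false;
    private int payload;

    // Producer: publishes the value and signals any waiting consumer.
    public void put(int value) {
        synchronized (lock) {
            payload = value;
            ready = true;
            lock.notifyAll();        // wake threads waiting on this monitor
        }
    }

    // Consumer: waits until the condition holds, then reads the value.
    public int take() throws InterruptedException {
        synchronized (lock) {
            while (!ready) {         // loop guards against spurious wake-ups
                lock.wait();         // releases the monitor while waiting
            }
            return payload;
        }
    }
}
```

Note that wait() must be called while holding the monitor, and the condition is rechecked in a loop rather than an if, as the JLS permits spurious wake-ups.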

A critical section in a code is designated with reference to an object’s monitor. A thread must acquire the object’s monitor before executing the critical section of code. To achieve this, a synchronized keyword can be used in two ways:

Either declare a method as a critical section. For example:

public class CriticalSectionDemo{
   public synchronized void aCriticalMethod(){
      // ...some code
   }
}
Or, create a critical section block. For example:

public class CriticalSectionDemo{
   public void aMethod(){
      // ...some code
      synchronized(this){
         // ...some code
      }
      // ...some code
   }
}
The JVM handles the responsibility of acquiring and releasing an object monitor's lock; the synchronized keyword simply designates a block or method as critical. Before entering the designated block, a thread first acquires the monitor lock of the object and releases it as soon as its job is done. There is no limit on how many times a thread may re-acquire an object monitor's lock it already holds, but it must release the lock before another thread can acquire that same object's monitor.
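That re-acquisition property means Java monitor locks are reentrant: a thread holding an object's monitor can enter another synchronized method on the same object without deadlocking itself. A small sketch (class and method names are illustrative):

```java
public class ReentrantDemo {
    public synchronized void outer() {
        inner();   // same thread re-acquires this object's monitor: no deadlock
    }

    public synchronized void inner() {
        System.out.println("monitor held twice by the same thread");
    }
}
```

The monitor is fully released only when the thread has exited every synchronized region it entered on that object.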


This article tried to give a perspective on what a Java thread means in one of its many aspects, yet it is a very rudimentary explanation that omits many details. The thread construct in Java is deeply associated with the Java Memory Model, especially in how its implementation is handled by the JVM behind the scenes. Perhaps the most valuable literature for understanding the idea is the Java Language Specification and the Java Virtual Machine Specification, both available in HTML and PDF formats. Interested readers may go through them to get a more elaborate idea.
