
The Java RMI Server Framework

  • March 29, 2002
  • By Ed Harned
An RMI Server runs as a separate process on the computer. Because this process is not bound to the timing of the Client processes that call it, it is asynchronous. An asynchronous process requires a degree of management before it can execute independently: a framework.

This article explains why an asynchronous process needs management and walks through the steps necessary to design a custom Asynchronous Process Manager.

First, we will set up the queuing and threading environment necessary for an Asynchronous Process Manager. Next, we will turn this single-component environment into a Request Broker capable of processing multiple queues in parallel. (5,400 words)


Why use a framework?
Why are Enterprise Java Beans so successful?  Because they run inside a container -- a framework that manages persistence, messaging, thread management, logging and much more.

Why are Java GUI applets and applications so successful?  Because they run inside a container -- a framework that manages the event queue and the user interface, (the JFrame, Frame, Window, Container, and Component classes).

What does an RMI Server offer applications outside of basic communications?  Nothing!  You want it; you build it.

Backend application developers know that it is first necessary to build a framework in which the application lives. An RMI Server, (backend [activatable] remote object), runs in a plain brown box in a dark room with no one watching it. If we simply add application code to this empty box, we will encounter insurmountable problems.

Under the hood
To understand the need for a framework, it is first necessary to briefly get under the hood of an RMI Server.

  1. The RMI Runtime takes care of the two most difficult issues with networking -- the low level line protocol and the basic application level protocol, (this is the Marshalling, etc).

  2. For most implementations, the RMI Runtime creates a thread to handle each request from a Client. Once the request finishes, the thread waits for a brief period for the next Client request. In this way, the RMI-Connection may reuse threads. If no new request comes in, the RMI Runtime destroys the thread.

  3. So, where are the other features? There are none.

It is good that the RMI Runtime creates threads to handle requests. However, the limitations of the RMI Runtime environment quickly hinder our processing. Consider these two issues:

Timing:  The Client sends a request to the RMI Server for information contained in a private resource. If a horde of other users are updating the private resource, by the time the request completes the original user has gone home. If the private resource is non-functioning to the point where the request cannot complete, not only has the original user gone home, the RMI-Connection thread hangs forever.

The autonomous request, with callback:  That is -- send in a request, have it processed by a background thread that contacts the original Client when complete. If we simply create a new application thread for every request, the create/destroy overhead and the number of application threads will put a severe strain on the Server, and, the Virtual Machine will eventually run out of resources.

Pragmatic
The practical solution to these and many more problems is to separate the RMI-Connection activity from the application processing. One does this by creating an application queuing and threading structure. This is the way highly reliable, fully mission-critical software products work.

For a Client that needs a timed response, the RMI-Connection thread contacts an application thread. If the application thread does not respond within the time limit, then the RMI-Connection thread returns to the Client with a timeout message.

For an autonomous request, the RMI-Connection thread contacts an application thread and immediately returns to the Client with an "it's been scheduled" message.

The concern now is how to design a queuing and application threading environment so that:

  • The RMI-Connection threads and the application threads can talk to each other
  • The application environment can know about Client timeouts and recover
  • A thread overload problem does not occur, (i.e. when so many application threads are executing that the JVM cannot sustain any more threads, or when so many application threads compete for resources that the environment effectively deadlocks).
  • The application thread create/destroy overhead does not bog down the application processing
  • The threading environment is monitored to pinpoint stalls
  • The entire threading environment may quiesce and shut down gracefully

We are going to examine an Asynchronous Process Manager that you can run. The classes for execution as well as the source code are downloadable in Resources.


Logical Processes
The operating system refers to the Java Virtual Machine as a process. The operating system refers to threads as lightweight processes. What we are going to create are logical processes.

In a managed asynchronous process environment, requests stack up in queues. Application threads fetch requests from the queues and act upon them. Developers define queues for the requests and the maximum number of application threads to service those requests. These are logical processes.

Figure 1: Logical Processes

The Client sends a request to the Server. The RMI Runtime creates, or reuses, an RMI-Connection thread that executes a method within the Implementation Class. The appropriate method places the request into a prioritized wait list within a queue and uses an Object-notify method to wake up an application thread to service the request.

The timed request:  The RMI-Connection thread uses the Object-wait method to suspend its execution until either the application thread finishes or the time limit expires. Upon completion, the RMI-Connection thread returns to the Client with the response from the application or a time out message.

The autonomous request:  The RMI-Connection thread returns to the Client with an "it was scheduled" message. All the application processing happens asynchronously.

What do we need for both these scenarios?

  • An interface that extends java.rmi.Remote with at least three methods:
    • A syncRequest() method that handles the timed request
    • An asyncRequest() method that handles the autonomous request
    • A shutRequest() method so that a Client may gracefully shut down the RMI Server
  • A concrete implementation class that carries out the interface's declaration
  • A parameter class that passes Client information to the RMI Server
  • A start up class that creates the RMI Server environment
  • A queue class that defines the priority wait lists, the application threads and a reference to the application processing class
  • A thread class that fetches requests from the wait lists and calls the application processing class
  • An application processing class to execute the application logic
  • Support classes

The interface is straightforward and simple:

import java.rmi.Remote;
import java.rmi.RemoteException;

public interface FrameWorkInterface extends Remote {

    // timed request: wait for the application thread, up to a time limit
    public Object[] syncRequest(FrameWorkParm in) throws RemoteException;

    // autonomous request: schedule the work and return immediately
    public Object[] asyncRequest(FrameWorkParm in) throws RemoteException;

    // gracefully shut down the RMI Server
    public String shutRequest() throws RemoteException;
}

The concrete implementation class contains the logic for placing the request in a request queue, waking up an application thread and returning the reply to the Client.

The parameter class is very simple. The instance fields are:

  • Object input -- an optional Object passed from the Client to the application processing class.
  • String func_name -- (Function), the name of the logical process.
  • int wait_time -- when using syncRequest(), the maximum number of seconds the RMI-Connection thread should wait for the application thread to finish, (the timeout interval).
  • int priority -- the priority of the request. A priority 1 request is selected before a priority 2 request, and so on.
public final class FrameWorkParm
        implements java.io.Serializable {

    private Object input;       // input data
    private String func_name;   // Function name
    private int    wait_time;   // maximum time to wait (seconds)
    private int    priority;    // priority of the request

    // constructor and accessor methods omitted here
}
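For illustration only, here is a minimal sketch of how a Client might build a FrameWorkParm and issue a timed request. The FrameWorkParm constructor, the registry name, and the lookup URL are assumptions, not part of the framework's source.

import java.rmi.Naming;

public class DemoClient {

    public static void main(String[] args) throws Exception {

        // look up the remote object (the registry name and URL are assumptions)
        FrameWorkInterface fw =
            (FrameWorkInterface) Naming.lookup("//localhost/FrameWorkServer");

        // hypothetical constructor: input Object, Function name, wait seconds, priority
        FrameWorkParm parm = new FrameWorkParm("some input", "DemoFunction", 5, 1);

        // timed request: returns the application's reply, or a timeout message
        Object[] reply = fw.syncRequest(parm);
        System.out.println(reply[0]);
    }
}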

The start up class contains logic for establishing the persistent environment and exporting the remote object.

Application Queues contain three elements (a minimal sketch follows this list):

  1. Prioritized wait lists where requests are pending.
    Requests stack up in wait lists when no threads are immediately available to act upon them. When threads finish processing a request, they look in the wait lists for the next request. This reduces machine overhead by letting each thread complete multiple requests between a start/stop processing sequence.
  2. Anchor points for application threads.
    By defining the total number of threads in each queue and only instantiating a thread when it is actually necessary, we limit contention among threads and curtail the thread overload problem.
  3. The reference to the application processing class.
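To make that structure concrete, here is a minimal sketch of what such a queue might hold; the class and field names are assumptions, not the framework's actual source.

import java.util.LinkedList;

// Minimal sketch of an application queue; names are assumptions.
public final class DemoQueue {

    // prioritized wait lists: index 0 holds priority 1 requests, and so on
    private final LinkedList[] waitLists;

    // anchor points for application threads; an entry stays null
    // until a thread is actually needed
    private final Thread[] threads;

    // reference to the user-defined application processing class
    private final Object workClass;

    public DemoQueue(int priorities, int maxThreads, Object workClass) {
        waitLists = new LinkedList[priorities];
        for (int i = 0; i < priorities; i++) {
            waitLists[i] = new LinkedList();
        }
        this.threads   = new Thread[maxThreads];
        this.workClass = workClass;
    }
}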

The thread class is straightforward. It has a run method where it waits for work. It checks the queue's wait lists for new requests. When it finds a request, it calls a method on the user-defined application processing class to perform the application's work. It sends the return object from the application processing class back to the RMI-Connection thread. This 'fetch-next-request / call-work-class' loop continues until there are no more pending requests.

Most developers see threading as part of the application class. This is where the application class extends java.lang.Thread or implements java.lang.Runnable. Handling the thread logic in addition to the application logic requires two different thought patterns to merge as one. This framework design separates the thread logic from the application logic so that any application processing class may easily plug-in to a thread structure.

A separate application processing class contains the application logic. The thread class calls the application processing class in the appropriate way for your framework. The calling may be with reflection or any other method. For this example, we use an Interface with one method. Any class that implements this Interface is acceptable.

public interface DemoCall {

    public Object doWork(Object in, FrameWorkInterface fw)
        throws java.lang.Throwable;
}

The start up class gets a new instance of the application processing class, (that implements DemoCall):  DemoCall to_call = new YourAppl();
The thread class then calls the application processing class, to_call.doWork();
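As an illustration of the 'fetch-next-request / call-work-class' loop described above, here is a minimal sketch of what the thread class's run() method could look like. The field names and the wait-list handling are assumptions, not the framework's actual source.

import java.util.LinkedList;

// Minimal sketch of the application thread loop; field names and wiring are assumptions.
public class QueueThread extends Thread {

    LinkedList waitList;         // the queue's wait list, set by the queue that owns this thread
    DemoCall to_call;            // the user-defined application processing class
    FrameWorkInterface fw;       // the Server itself, so the application may call it back

    public void run() {
        while (true) {
            // sleep on this object's monitor until an RMI-Connection thread calls notify()
            synchronized (this) {
                boolean empty;
                synchronized (waitList) { empty = waitList.isEmpty(); }
                if (empty) {
                    try { wait(); } catch (InterruptedException e) { return; }
                }
            }
            // fetch-next-request / call-work-class loop: drain the wait list
            while (true) {
                Object request;
                synchronized (waitList) {
                    if (waitList.isEmpty()) break;
                    request = waitList.removeFirst();
                }
                try {
                    // run the application logic and capture its reply
                    Object back = to_call.doWork(request, fw);
                    // ... post 'back' and wake the waiting RMI-Connection thread (shown later)
                } catch (Throwable t) {
                    // ... report the failure to the requester
                }
            }
        }
    }
}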

Looking at the parameter to the application doWork() method we see that it contains
1) the reference to the Object from the Client and
2) a reference to the Server itself. The second reference is so the application may call the Server, as a Client. This is recursion: one of the most useful techniques in programming and sometimes one of the most difficult to implement.
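A minimal sketch of such an application processing class appears below. The class name YourAppl comes from the article's own example; everything inside doWork(), including the commented-out recursive call, is illustrative only.

// Minimal sketch of an application processing class; the body is illustrative only.
public class YourAppl implements DemoCall {

    public Object doWork(Object in, FrameWorkInterface fw)
            throws java.lang.Throwable {

        // 'in' is the Object the Client placed in FrameWorkParm
        String request = (String) in;

        // the application may call the Server recursively, just as a Client would;
        // the FrameWorkParm constructor used here is an assumption
        // Object[] nested = fw.syncRequest(new FrameWorkParm(request, "OtherFunction", 5, 1));

        // the return Object travels back to the RMI-Connection thread
        return "processed: " + request;
    }
}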

We expound on the support classes as necessary.

The underlying principle behind this framework is the concept of "common memory".

Common Memory
In order for any two threads to talk to each other, they must use memory that is common between them. No one thread owns this memory. It is updateable and viewable by all threads.

An RMI Server is persistent. Any object the Server creates and holds a live reference to remains for the life of the Server. When the Server starts, it gets a new instance of a "common memory" class and assigns a private, static field with the reference. Since the Server remains and the reference is live, the object is never garbage collected. Other objects the Server creates also get a reference to the "common memory" class, which increases the number of live references. This includes the RMI Implementation class. In this way, all threads running on the Server -- the RMI-Connection threads and the application threads -- have access to this "common memory."

Figure 2: Common Memory

(There are many other ways of getting access to a common class. Two other ways are: 1) The Singleton, with its getInstance() method. 2) Using a class with static fields; each thread gets its own instance of the class, but every instance sees the same static class fields.)
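For reference, here is a minimal sketch of the Singleton alternative; the class name is illustrative, not part of the framework.

// Minimal sketch of the Singleton alternative; the class name is illustrative.
public final class CommonMemory {

    // the single, shared instance
    private static final CommonMemory instance = new CommonMemory();

    private CommonMemory() {}

    // every thread retrieves the same object, and therefore the same fields
    public static CommonMemory getInstance() {
        return instance;
    }
}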

The key to modifying variables for use in multiple threads is the synchronized statement or synchronized method modifier. In Java, there is shared main memory and thread memory. Every thread gets a copy of the variables it needs from shared main memory and saves those variables back into shared main memory for use by other threads. See Chapter 17 of the Java Specification.

Putting the code that accesses or mutates an object's variables within a synchronized block or method accomplishes three functions:

  1. Java locks out other threads, (synchronizing on the same object), from executing.
  2. Java reads the current value of a variable from shared main memory to thread storage when the thread accesses that variable.
  3. Java writes the new value of a variable from thread storage to shared main memory at the end of the block or method when the thread alters that variable.

Therefore, to guarantee integrity and to make sure all threads have access to the latest value of a variable, synchronize. For the ultimate word on memory access, see Can double-checked locking be fixed? by Brian Goetz. Also, for an in-depth article on multiple CPU thread synchronization, see Warning! Threading in a multiprocessor world, by Allen Holub.
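As a trivial illustration of that rule, a field held in common memory might be guarded like this; a sketch only, not framework code.

// Sketch: a counter held in common memory, guarded by synchronized methods.
public final class DemoCounter {

    private int count = 0;

    // locks out other threads, reads the latest value, writes the new value back
    public synchronized void increment() {
        count++;
    }

    public synchronized int current() {
        return count;
    }
}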

Wake up call
How can an RMI-Connection thread find an application queue and wake up an application thread? Then, how can that application thread wake up the RMI-Connection thread? The answers lie in the structure of the environment.

The FrameWorkBase class contains static references to other classes. For this example, the fields are public. (This is simply one way to establish common memory. As above, there are many more.) Some of those fields are:

public final class FrameWorkBase {

    // *--- All are class fields ---*

    // Main Array of Queues
    public static FrameWorkMain main_tbl = null;

    // Function Array
    public static FuncHeader func_tbl = null;

    // async Array
    public static AsyncHeader async_tbl = null;

    // sync Array
    public static SyncHeader sync_tbl = null;

    // Remote Object myself
    public static FrameWorkInterface Ti = null;

    // (other fields omitted)
}

The persistent RMI Server instantiates the base of common-memory, FrameWorkBase and assigns a class field to the reference of the FrameWorkBase object. Since the reference is live and the Server is persistent, Java does not garbage collect the object. Think of the start up class fields as anchor points.

public final class FrameWorkServer {

    // The base for all persistent processing
    private static FrameWorkBase T = null;

    /**
     * main entry point - starts the application
     * @param args java.lang.String[]
     */
    public static void main(java.lang.String[] args) {

        // the base for all processing
        T = new FrameWorkBase();

        // now, after initializing the other FrameWorkBase fields,
        // including the application queues and threads,
        // do the Implementation class

        try {
            // the Implementation class with a ref to FrameWorkBase
            FrameWorkImpl fwi = new FrameWorkImpl(T);

            // ... bind/export the remote object and continue start up
        } catch (java.rmi.RemoteException e) {
            // (exception handling added here for completeness; not in the original excerpt)
            e.printStackTrace();
        }
    }
}

Since the start up class passes the FrameWorkBase reference to the constructor of the Implementation class and the Implementation class saves the FrameWorkBase reference in its instance fields, all RMI-Connection threads have access to the FrameWorkBase class.

public final class FrameWorkImpl
        extends UnicastRemoteObject
        implements FrameWorkInterface {

    // instance field (base of common memory)
    private FrameWorkBase Ty;

    // constructor
    public FrameWorkImpl(FrameWorkBase T)
            throws RemoteException {

        // set common memory reference
        Ty = T;
    }

    // remote methods (syncRequest, asyncRequest, shutRequest) follow
}

When the Client invokes a remote method
The RMI-Connection thread, having a reference to the FrameWorkBase class as an instance field, searches the Function Array, (described later), for a match on the passed Function Name.

The Function entry contains a reference, (described later), for the desired Queue within the Main Array of Queues.

The RMI-Connection thread places the "enhanced request", (described later), into that Queue's wait list, finds a waiting thread and wakes up the thread. Since the Queue's instance fields contain references to all instantiated threads, the RMI-Connection thread may use notify().

This requires a little explaining since most books on threads say to use notifyAll(). The notify() and notifyAll() methods are methods of class Object. The question is -- for a particular Object, how many threads are there? In the case of the Queue, there is only one thread per instance.
    QueueThread qt1 = new QueueThread();
    QueueThread qt2 = new QueueThread();
The fields qt1 and qt2 are references; therefore, qt1.notify() wakes up an arbitrary thread waiting on qt1's monitor. Since there is only one thread for each QueueThread Object, the notify() works.
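Sketched below is how an RMI-Connection thread might queue the request and wake a specific QueueThread; the method, the wait list, and the variable names are assumptions consistent with the earlier sketches, not framework source.

// Sketch: the RMI-Connection thread queues the request and wakes one application thread.
// waitList, enhancedRequest and qt1 are illustrative names.
void scheduleRequest(java.util.LinkedList waitList, Object enhancedRequest, QueueThread qt1) {

    // stack the enhanced request in the queue's wait list
    synchronized (waitList) {
        waitList.addLast(enhancedRequest);
    }

    // only qt1 itself ever waits on this monitor, so notify() is sufficient
    synchronized (qt1) {
        qt1.notify();
    }
}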

By having queues with solid references to each thread, it is very simple to keep statistics. Statistics form the basis for tuning.

The RMI-Connection thread then waits for the application to finish by issuing an Object-wait (timeout value).

The other wake up call
When the application thread finishes processing, it has to find the calling RMI-Connection thread to wake it up. Finding the RMI-Connection thread is not so easy since the RMI Runtime spawns threads without a reference to the Object,
    (i.e., new RMI-Connection-Thread(), instead of with a reference, conn_obj = new etc.).
So, how does the application thread know who called it? It doesn't. This is why it must use notifyAll() and why a little more work is necessary.

The RMI-Connection thread must pass something to the application thread to uniquely identify itself. Any object the RMI-Connection thread creates and passes to another thread by reference is available to the other thread. After a synchronization event, each thread then has access to the current value of that object. For this example, the framework uses an integer array.

As above, the RMI-Connection thread places an "enhanced request" in the Queue's wait list. This is the Client's Object from the parameter, FrameWorkParm, and several other fields.

// requestor obj to cancel wait
Object requestor;
        
// Created by RMI-Connection thread
int[] pnp; 
// passed back object from the appl thread
Object back;

The RMI-Connection thread creates a new integer array,
assigns the first integer to 0 and
assigns the Object requestor =  this;.

// posted/not posted indicator: 0=not, 1=yes
// This is for the appl thread to post.
// Java passes arrays by reference, so the
// appl thread may have access to this object.
pnp = new int[1];
pnp[0] = 0;
// the reference to this object's monitor
requestor = this;

Thus, having passed the application thread enough information to inform it how to indicate request completion, (by assigning pnp[0] = 1), and a reference for the notifyAll() method, (this), the RMI-Connection thread may now wait.

Note below: The loop on (pnp[0] == 0) is necessary because, when any application thread issues a notifyAll(), Java wakes up all the RMI-Connection threads waiting on that monitor, not just the one whose request completed.

// Wait for the request to complete, or, time out

// get the monitor for this RMI object
synchronized (this) {
                
    // until work finished
    while  (pnp[0] == 0) {

        // wait for a post or timeout
        try {
            // max wait time is the time passed
            wait(time_wait);

        } catch (InterruptedException e) {}

        // When not posted
        if  (pnp[0] == 0) {

            // current time
            time_now = System.currentTimeMillis();

            // decrement wait time
            time_wait -= (time_now - start_time);

            // When no more wait time remains
            if  (time_wait < 1) {

                // get out of the loop, timed out
                break;
            }
            else {
                // new start time
                start_time = time_now;
            }
        }
    }
}

When the wait completes, the RMI-Connection thread picks up the Object returned from the application and passes the Object back to the Client.

On the other side when the processing completes, the application thread does the following:

// get lock on RMI obj monitor
synchronized (requestor) {
    // the object from the application
    back = data_object;
    // set posted
    pnp[0] = 1;

    // wake up
    requestor.notifyAll();
}



