The Java RMI Server Framework


An RMI Server runs as a separate process in the computer. Since this process does not
execute in lockstep with the Client processes, it is
asynchronous. An asynchronous process requires a degree of management to allow
it to execute independently: a framework.

This article will allow you to understand why an asynchronous process needs
management and the steps necessary to design a custom Asynchronous Process Manager.

First, we will set up the queuing and threading environment necessary for an
Asynchronous Process Manager. Next, we will turn this single-component environment into a
Request Broker capable of processing multiple queues in parallel.   (5,400 words)

Why use a framework?
Why are Enterprise Java Beans so successful?  Because
they run inside a container — a framework that manages persistence, messaging, thread
management, logging and much more.

Why are Java GUI applets and applications
so successful?  Because they run inside a container — a framework that manages the
event queue and the user interface, (the JFrame, Frame, Window, Container, and Component
classes).

What does an RMI Server offer applications
outside of basic communications?  Nothing!  You want it; you build it.

Backend application developers know that it is first necessary to build a framework in
which the application lives. An RMI Server, (backend [activatable] remote object),
runs in a plain brown box in a dark room with no one watching it. If we simply add
application code to this empty box, we will encounter insurmountable problems.

Under the hood
To understand the need for a framework, it is first necessary to briefly get
under the hood of an RMI Server.

  1. The RMI Runtime takes care of the two most difficult
    issues with networking — the low level line protocol and the basic application level
    protocol, (the marshalling and unmarshalling of arguments and return values).

  2. For most implementations, the RMI Runtime creates a
    thread to handle each request from a Client. Once the request finishes, the thread waits
    for a brief period for the next Client request. In this way, the RMI-Connection may reuse
    threads. If no new request comes in, the RMI Runtime destroys the thread.

  3. So, where are the other features? There are none.

It is good that the RMI Runtime creates threads to handle requests. However, the
limitations of the RMI Runtime environment quickly hinder our processing. Consider these
two issues:

Timing:  The Client sends a request to the RMI Server
for information contained in a private resource. If a horde of other users are updating
the private resource, by the time the request completes the original user has gone home.
If the private resource is non-functioning to the point where the request cannot complete,
not only has the original user gone home, the RMI-Connection thread hangs forever.

The autonomous request, with callback:  That is — send in a
request, have it processed by a background thread that contacts the original Client when
complete. If we simply create a new application thread for every request, the
create/destroy overhead and the number of application threads will put a severe strain on
the Server, and, the Virtual Machine will eventually run out of resources.

The practical solution to these and many more problems is to separate the RMI-Connection
activity from the application processing. One does this by creating an application queuing
and threading structure. This is the way highly reliable, fully mission-critical software
products work.

For a Client that needs a timed response, the RMI-Connection thread contacts an
application thread. If the application thread does not respond within the time limit, then
the RMI-Connection thread returns to the Client with a timeout message.

For an autonomous request, the RMI-Connection thread contacts an application thread and
immediately returns to the Client with an "it’s been scheduled" message.

Now the concern is how does one design a queuing and application threading environment
so that:

  • The RMI-Connection threads and the application threads can talk to each other
  • The application environment can know about Client timeouts and recover
  • A thread overload problem does not occur, (i.e. when so many application threads are
    executing that the JVM cannot sustain anymore threads, or, when many application threads
    cause so much competition for resources that the environment effectively deadlocks).
  • The application thread create/destroy overhead does not bog down the application
  • The threading environment is monitored to pinpoint stalls
  • The entire threading environment may quiesce and shut down gracefully

We are going to examine an Asynchronous Process Manager that you can run. The classes
for execution as well as the source code are downloadable in Resources.

Logical Processes
The operating system refers to the Java Virtual Machine as
a process. The operating system refers to threads as lightweight processes. What we are
going to create are logical processes.

In a managed asynchronous process environment, requests stack up in queues. Application
threads fetch requests from the queues and act upon them. Developers define queues for the
requests and the maximum number of application threads to service those requests. These
are logical processes.


Figure 1  Logical Processes
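The logical-process idea above can be sketched in code. The class below is a hypothetical illustration, not the framework's actual source: it pairs prioritized wait lists with a ceiling on the number of application threads.

```java
import java.util.LinkedList;

// Illustrative sketch: a logical process = prioritized wait lists
// plus a maximum application-thread count. Names are assumptions.
public class LogicalProcess {
    private final String name;                    // Function name, e.g. "F1"
    private final int maxThreads;                 // ceiling on application threads
    private final LinkedList<Object>[] waitLists; // one wait list per priority

    @SuppressWarnings("unchecked")
    public LogicalProcess(String name, int maxThreads, int priorities) {
        this.name = name;
        this.maxThreads = maxThreads;
        this.waitLists = new LinkedList[priorities];
        for (int i = 0; i < priorities; i++) waitLists[i] = new LinkedList<>();
    }

    // queue a request at the given priority (1 = highest)
    public synchronized void enqueue(Object request, int priority) {
        waitLists[priority - 1].addLast(request);
    }

    // fetch the highest-priority pending request, or null if none
    public synchronized Object next() {
        for (LinkedList<Object> list : waitLists)
            if (!list.isEmpty()) return list.removeFirst();
        return null;
    }

    public int maxThreads() { return maxThreads; }
    public String name()    { return name; }
}
```

A developer would define one such object per Function, and application threads would loop on next() for work.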

The Client sends a request to the Server. The RMI Runtime creates, or reuses, an
RMI-Connection thread that executes a method within the Implementation Class. The
appropriate method places the request into a prioritized wait list within a queue and uses
an Object-notify method to wake up an application thread to service the request.

The timed request:  The RMI-Connection thread uses the
Object-wait method to suspend its execution until either the application thread finishes
or the time limit expires. Upon completion, the RMI-Connection thread returns to the
Client with the response from the application or a time out message.

The autonomous request:  The RMI-Connection thread returns to the
Client with an "it was scheduled" message. All the application processing
happens asynchronously.

What do we need for both these scenarios?

  • An interface that extends java.rmi.Remote with at least three methods:
    • A syncRequest() method that handles the timed request
    • An asyncRequest() method that handles the autonomous request
    • A shutRequest() method so that a Client may gracefully shut down the RMI Server
  • A concrete implementation class that carries out the interface’s declaration
  • A parameter class that passes Client information to the RMI Server
  • A start up class that creates the RMI Server environment
  • A queue class that defines the priority wait lists, the application threads and a
    reference to the application processing class
  • A thread class that fetches requests from the wait lists and calls the application
    processing class
  • An application processing class to execute the application logic
  • Support classes

The interface is straightforward:

public interface FrameWorkInterface
        extends Remote {
    public Object[] syncRequest(FrameWorkParm in)
        throws RemoteException;
    public Object[] asyncRequest(FrameWorkParm in)
        throws RemoteException;
    public String shutRequest()
        throws RemoteException;
}

The concrete implementation class
contains the logic for placing the request in a request queue, waking up an application
thread and returning the reply to the Client.

The parameter class
is very simple. The instance fields are:

  • Object input — Is an optional Object from the Client to the application processing
  • String func_name — (Function), the name of the logical process.
  • int wait_time — When using a syncRequest(), the maximum number of seconds the
    RMI-Connection thread should wait for the application thread to finish, (timeout).
  • int priority — The priority of the request. A priority 1 request is selected before a
    priority 2, etc.

public final class FrameWorkParm
        implements java.io.Serializable {
    private Object input;       // input data
    private String func_name;   // Function name
    private int    wait_time;   // maximum time to wait
    private int    priority;    // priority of the request
}

The start up class contains logic for
establishing the persistent environment and exporting the remote object.

Application Queues contain three elements:

  1. Prioritized wait lists where requests are pending.
    Requests stack up in wait lists when no threads are immediately available to act upon
    them. When threads finish processing a request, they look in the wait lists for the next
    request. This reduces machine overhead by letting each thread complete multiple requests
    between a start/stop processing sequence.
  2. Anchor points for application threads.
    By defining the total number of threads in each queue and only instantiating a thread when
    it is actually necessary, we limit contention among threads and curtail the thread
    overload problem.
  3. The reference to the application processing class.

The thread class is straightforward. It
has a run method where it waits for work. It checks the queue’s wait lists for new
requests. When it finds a request, it calls a method on the user-defined application
processing class to perform the application’s work. It sends the return object from the
application processing class back to the RMI-Connection thread. This ‘fetch-next-request /
call-work-class’ loop continues until there are no more pending requests.
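The fetch-next-request / call-work-class loop might look like the following sketch. The class and method names are assumptions, and the real thread class works against the queue's wait lists rather than a private deque; it only shows the shape of the loop.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of the thread class's work loop; names assumed.
public class QueueThreadSketch extends Thread {
    private final Deque<Runnable> waitList = new ArrayDeque<>();
    private volatile boolean shuttingDown = false;

    public synchronized void submit(Runnable work) {
        waitList.addLast(work);
        notify();                 // one thread per object, so notify() suffices
    }

    public synchronized void shutDown() {
        shuttingDown = true;
        notify();
    }

    @Override public void run() {
        while (true) {
            Runnable work;
            synchronized (this) {
                // wait until there is a request, or we are shutting down
                while (waitList.isEmpty() && !shuttingDown) {
                    try { wait(); } catch (InterruptedException e) { return; }
                }
                if (waitList.isEmpty()) return;  // shutting down, nothing left
                work = waitList.removeFirst();
            }
            work.run();  // call the application processing class
        }
    }
}
```

The loop drains every pending request before waiting again, which is exactly the behavior that lets one thread complete multiple requests per start/stop sequence.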

Most developers see threading as part of the application class. This is where the
application class extends java.lang.Thread or implements java.lang.Runnable. Handling the
thread logic in addition to the application logic requires two different thought patterns
to merge as one. This framework design separates the thread logic from the application
logic so that any application processing class may easily plug-in to a thread structure.

A separate application processing class
contains the application logic. The thread class calls the application processing class in
the appropriate way for your framework. The calling may be with reflection or any other
method. For this example, we use an Interface with one method. Any class that implements
this Interface is acceptable.

public interface DemoCall {

    public Object doWork(Object in, FrameWorkInterface fw)
        throws java.lang.Throwable;
}

The start up class gets a new instance of the application processing class, (that
implements DemoCall):  DemoCall to_call = new YourAppl();
The thread class then calls the application processing class: to_call.doWork(in, fw);

Looking at the parameters to the application doWork() method we see that they contain
1) the reference to the Object from the Client and
2) a reference to the Server itself. The second reference is so the application may call
the Server, as a Client. This is recursion; one of the most useful techniques in
programming and sometimes the most difficult to implement.

We expound on the support classes as they appear in the remainder of this article.

The underlying principle behind this framework is the concept of "common memory."

Common Memory
In order for any two threads to talk to each other, they must use memory that is
common between them. No one thread owns this memory. It is updateable and viewable by all threads.

An RMI Server is persistent. Any objects the Server creates with a live reference
remain for the life of the Server. When the Server starts, it gets a new instance of a
"common memory" class and assigns a private, static field with the reference.
Since the Server remains and the reference is live, the object is never garbage collected.
Other objects the Server creates also get a reference to the "common memory"
class, which increases the number of live references. This includes the RMI Implementation
class. In this way, all threads running on the Server — the RMI-Connection threads and
the application threads — have access to this "common memory."


Figure 2  Common Memory

(There are many other ways of getting access to a common class. Two other ways
are: 1) The Singleton, with its getInstance()
method. 2) Using a class with static fields and each thread gets a new instance
of the class, (static fields belong to the class,
so every instance shares them.))

The key to modifying variables for use in multiple threads is the synchronized
statement or synchronized method modifier. In Java, there
is shared main memory and thread memory. Every thread gets a copy of the variables it
needs from shared main memory and saves those variables back into shared main memory for
use by other threads. See Chapter 17 of the Java Language Specification.

Putting the access/mutate to an object’s variables within a synchronized block or
method accomplishes three functions:

  1. Java locks out other threads, (synchronizing on the same
    object), from executing.
  2. Java reads the current value of a variable from shared
    main memory to thread storage when the thread accesses that variable.
  3. Java writes the new value of a variable from thread
    storage to shared main memory at the end of the block or method when the thread alters
    that variable.

Therefore, to guarantee integrity and to make sure all threads have access to the
latest value of a variable, synchronize. For the ultimate word on memory
access, see Can double-checked locking be fixed? by Brian Goetz.
Also, for an in-depth article on multiple CPU thread synchronization, see Warning! Threading in a multiprocessor world, by Allen Holub.
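As a minimal illustration of the three functions above, assuming nothing beyond the language itself: a shared counter whose every access and mutation sits inside a synchronized method is always seen, with its latest value, by all threads.

```java
// Minimal illustration: all reads and writes of shared state go
// through synchronized methods, so every thread sees the latest value.
public class SharedCounter {
    private int value = 0;   // shared state, touched only under the lock

    public synchronized void increment() { value++; }      // write flushed to main memory
    public synchronized int  get()       { return value; } // read fresh from main memory
}
```

Two threads each incrementing 1,000 times will always leave the counter at exactly 2,000; remove the synchronized modifiers and updates can be lost.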

Wake up call
How can an RMI-Connection thread find an application queue and wake up an application
thread? Then, how can that application thread wake up the RMI-Connection thread? The
answers lie in the structure of the environment.

The FrameWorkBase class contains static references to other classes.
For this example, the fields are public. (This is simply one way to establish common
memory. As above, there are many more.) Some of those fields are:

public final class FrameWorkBase {

  // *--- All are class fields ---*
  // Main Array of Queues
  public static FrameWorkMain main_tbl = null;
  // Function Array
  public static FuncHeader func_tbl = null;
  // async Array
  public static AsyncHeader async_tbl = null;
  // sync Array
  public static SyncHeader sync_tbl = null;
  // Remote Object myself
  public static FrameWorkInterface Ti = null;

The persistent RMI Server instantiates the base of common-memory, FrameWorkBase
and assigns a class field to the reference of the FrameWorkBase object. Since the
reference is live and the Server is persistent, Java does
not garbage collect the object. Think of the start up class fields as anchor points.

public final class FrameWorkServer {

    // The base for all persistent processing
    private static FrameWorkBase T = null;

    /**
     * main entry point - starts the application
     * @param args java.lang.String[]
     */
    public static void main(java.lang.String[] args) {

        // the base for all processing
        T = new FrameWorkBase();

        // now, after initializing the other FrameWorkBase fields
        // including the application queues and threads,
        //   do the Implementation class

        // the Implementation class with a ref to FrameWorkBase
        FrameWorkImpl fwi = new FrameWorkImpl(T);

Since the start up class passes the FrameWorkBase reference to the constructor
of the Implementation class and the Implementation class saves the FrameWorkBase
reference in its instance fields, all RMI-Connection threads have access to the FrameWorkBase object.

public final class FrameWorkImpl
        extends UnicastRemoteObject
        implements FrameWorkInterface {

    // instance field (base of common memory)
    private FrameWorkBase Ty;

    // constructor
    public FrameWorkImpl(FrameWorkBase T)
            throws RemoteException {

        // set common memory reference
        Ty = T;

When the Client invokes a remote method
The RMI-Connection thread, having a reference to the FrameWorkBase class as an
instance field, searches the Function Array, (described later), for a match on the passed
Function Name.

The Function entry contains a reference, (described later), for the desired Queue
within the Main Array of Queues.

The RMI-Connection thread places the "enhanced request", (described later),
into that Queue’s wait list, finds a waiting thread and wakes up the thread. Since the
Queue’s instance fields contain references to all instantiated threads, the RMI-Connection
thread may use notify().

This requires a little explaining since most books on threads say to use notifyAll().
The notify() and notifyAll() methods are methods of class Object. The question is — for a
particular Object, how many threads are there? In the case of the Queue, there is only one
thread per instance.
    QueueThread qt1 = new QueueThread();
    QueueThread qt2 = new QueueThread();
The fields qt1 and qt2 are object references; therefore, qt1.notify() wakes up an arbitrary
thread waiting on qt1’s monitor. Since there is only one thread for each QueueThread Object,
the notify() wakes up exactly the thread we intend.
By having queues with solid references to each thread, it is very simple to keep
statistics. Statistics form the basis for tuning.

The RMI-Connection thread then waits for the application to finish by issuing an
Object-wait with a timeout value.

The other wake up call
When the application thread finishes processing, it has to find the calling
RMI-Connection thread to wake it up. Finding the RMI-Connection thread is not so easy
since the RMI Runtime spawns threads without a reference to the Object,
    (i.e., new RMI-Connection-Thread(), instead of with a reference,
conn_obj = new etc.).
So, how does the application thread know who called it? It doesn’t. This is why it must
use notifyAll() and why a little more work is necessary.

The RMI-Connection thread must pass something to the application thread to uniquely
identify itself. Any object the RMI-Connection thread creates and passes to another thread
by reference is available to the other thread. After a synchronization event, each thread
then has access to the current value of that object. For this example, the framework uses
an integer array.

As above, the RMI-Connection thread places an "enhanced request" in the
Queue’s wait list. This is the Client’s Object from the parameter, FrameWorkParm,
and several other fields.

// requestor obj to cancel wait
Object requestor;
// Created by RMI-Connection thread
int[] pnp; 
// passed back object from the appl thread
Object back;

The RMI-Connection thread creates a new integer array,
assigns the first integer to 0 and
assigns the Object requestor =  this;.

// posted/not posted indicator: 0=not, 1=yes
// This is for the appl thread to post.
// Java passes arrays by reference, so the
// appl thread may have access to this object.
pnp = new int[1];
pnp[0] = 0;
// the reference to this object's monitor
requestor = this;

Thus, having passed the application thread enough information to inform it how to
indicate request completion, (by assigning pnp[0] = 1), and a reference for the
notifyAll() method, (this), the RMI-Connection thread may now wait.

Note below: The spin lock on (pnp[0] == 0) is because when any
application thread issues a notifyAll(), Java wakes up all
the RMI-Connection threads.

// Wait for the request to complete, or, time out

// get the monitor for this RMI object
synchronized (this) {
    // until work finished
    while (pnp[0] == 0) {

        // wait for a post or timeout
        try {
            // max wait time is the time passed
            wait(time_wait);

        } catch (InterruptedException e) {}

        // When not posted
        if (pnp[0] == 0) {

            // current time
            time_now = System.currentTimeMillis();

            // decrement wait time
            time_wait -= (time_now - start_time);

            // When no more seconds remain
            if (time_wait < 1) {

                // get out of the loop, timed out
                break;

            } else {
                // new start time
                start_time = time_now;
            }
        }
    }
}

When the wait completes, the RMI-Connection thread picks up the Object returned from
the application and passes the Object back to the Client.

On the other side, when the processing completes, the application thread does the following:
// get lock on RMI obj monitor
synchronized (requestor) {
    // the object from the application
    back = data_object;
    // set posted
    pnp[0] = 1;

    // wake up the waiting RMI-Connection thread
    requestor.notifyAll();
}
Autonomous request
The autonomous request does not require the RMI-Connection thread to wait for completion.
Therefore, the autonomous request may seem very simple, but there is one catch. What
happens to the return data from the doWork() method of the
application processing class? It should not be the concern of an application where its
return data goes. A developer must be able to use the same application for a synchronous
or asynchronous request. Therefore, we need an Agent for the autonomous request.

As part of the class definition of a Function is a field for the optional
Agent Queue. The application processing class may return an Object to the application
thread. For an autonomous request, when desirable, the application thread may activate a
logical process by creating a new "enhanced request", (with the just-returned
Object), placing that "enhanced request" into the Agent queue’s wait list and
waking up an Agent thread. This is very similar to a standard autonomous request that
comes from the Client. The Agent logical process completes asynchronously, without any
return data. The Agent queue application processing class is where you may place the
call-back, call-forward or any other logic necessary for dealing with a request
completion. [This is a little confusing, but hang in there, eventually it becomes clear.]

Shut down
In order to gracefully shut down the RMI Server, (rather than using something like a kill
-9 <pid>), every implementation should contain a shut down method. However, if the
shut down method simply ends the Java Virtual Machine, the
method’s return message never makes it back to the Client. The better way is to start a
shut down thread. The shut down thread sleeps about two seconds, (to give the method’s
return message a chance to clear the virtual machine), and then the shut down thread
issues System.exit(0).
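A sketch of such a shut down thread follows. The two-second delay matches the description above; the injectable exit action is an illustrative choice so the class can be exercised without ending the VM, and in the Server it would simply be System.exit(0).

```java
// Hypothetical sketch: delay briefly so the shut down method's return
// message can reach the Client, then end the virtual machine.
public class ShutDownThread extends Thread {
    private final long delayMillis;      // ~2000 in the Server
    private final Runnable exitAction;   // () -> System.exit(0) in the Server

    public ShutDownThread(long delayMillis, Runnable exitAction) {
        this.delayMillis = delayMillis;
        this.exitAction = exitAction;
    }

    @Override public void run() {
        try {
            Thread.sleep(delayMillis);   // let the reply clear the VM
        } catch (InterruptedException e) {
            // fall through and exit anyway
        }
        exitAction.run();
    }
}
```

The implementation's shutRequest() would start the thread and immediately return its message: new ShutDownThread(2000, () -> System.exit(0)).start();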

That wasn’t so bad. Now it is time to ask the three major questions necessary for every framework:
  1. What good does it do?
  2. Does it achieve what it promises?
  3. Is there anything more to it than its pompous invocation of time immemorial?

The answers to questions one and two are affirmative.

  1. This framework gives us the ability:
  • to time RMI Server requests
  • to run autonomous RMI Server requests
  • to limit the create/destroy overhead common with application threads
  • to curtail the thread overload problem
  • to easily debug requests, especially autonomous requests. (We know the application
    thread and the application class where the work resides. There is no need to trap
    extraneous code.)

This framework separates the RMI threading environment from the application threading
environment.  Additionally, it separates the application thread logic from the actual
application logic. This is the greater degree of abstraction so important in
Object Oriented Design.

  2. As you will see from the zip file code, this framework performs superbly.

The answer to the third question is: it’s nice, but … The return on investment may
not be worth the effort involved. An additional, critical part of the structure is error
recovery. Sometimes the anomaly code far outweighs the standard code. What this framework
needs is a bigger reason for living.

What if we could expand this simple framework to support multiple queues per request?
That is — when a Client request involves multiple accesses to resources, if we could
split the request and place each component into its own queue, then we could parallel
process the request. Now, the possibilities are endless. This is request brokering and it
is the subject of the next section.

The Request Broker
Having set up a basic queuing environment it soon becomes evident that some requests
really contain multiple actions, or components. For instance — a request may require
accesses to two different databases. We could access each database in a linear fashion but
the second access must wait for the first to complete.

A better way to handle a multi-action request is to separate the request into its
component parts and place each component into a separate queue. This is parallel
processing. It is more difficult than linear processing but the benefits far outweigh the
extra work up front.

What do we need to support request brokering? We need to understand that the Client
request is no longer the exclusive concern of a single logical process. Therefore, we must
put the Client’s request into a common area so that any number of logical processes may
access it. Since we already have a "common memory" environment, we must now
enhance it.

We need a common place to put the Object from the Client, (this is the input data
Object within the FrameWorkParm class). A simple array of Objects is
all that is necessary. For the example, the basic class is ObjDetail. The ObjDetail
class contains two fields, the Object from the Client and a status indicator.

public final class ObjDetail {

   private Object obj;     // object from Client
   private int    status;  // 0 = available, 1 = busy

The array in which the ObjDetail Object resides is a linked-list. (All the
arrays in this framework are linked-lists.) Access to entries within the linked-list is
direct, by subscript. When an RMI-Connection thread puts an Object into the list, all
that the RMI-Connection thread must pass to an application thread is the primitive int subscript.

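The put-and-return-a-subscript idea can be sketched as below. This is a hypothetical illustration: the real ObjDetail array also carries the status indicator and is a growable linked-list rather than a fixed array.

```java
// Sketch of the subscript idea: put() stores the Client's Object and
// hands back a primitive int that threads can pass around cheaply.
public class ObjArraySketch {
    private final Object[] slots;

    public ObjArraySketch(int capacity) { slots = new Object[capacity]; }

    // store an object in the first free slot, return its subscript
    public synchronized int put(Object o) {
        for (int i = 0; i < slots.length; i++) {
            if (slots[i] == null) { slots[i] = o; return i; }
        }
        throw new IllegalStateException("array full");
    }

    // fetch and free the entry at the given subscript
    public synchronized Object take(int subscript) {
        Object o = slots[subscript];
        slots[subscript] = null;
        return o;
    }
}
```

Passing an int between threads is far cheaper than copying the Object itself, and freed slots are reused by later requests.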
We need a common place to hold the request from the Client both for a synchronous and
for an asynchronous request. Once again, simple arrays of Objects are all that is
necessary. Both basic Objects must keep the subscript to the Object from the Client,
(above), and an integer array of subscripts to the Objects from the applications’ return
data. The classes for this example are SyncDetail and AsyncDetail.

public final class SyncDetail {

    private int[] output;        // output data array pointers
    private int   input;         // input area pointer, if any
    private int   status;        // 0=avail 1=busy
    private int   nbr_que;       // total queues in function
    private int   nbr_remaining; // remaining to be processed
    private int   next_output;   // next output in list   
    private int   wait_time;     // max wait time in seconds 
    private Object requestor;    // requestor obj to cancel wait
    private int[] pnp;           // 0 not posted, 1 is posted

public final class AsyncDetail {

    private int   input;         // pointer to input
    private int   out_agent;     // pointer to agent name
    private int   function;      // pointer to function name
    private int   nbr_que;       // nbr of queues in function
    private int   nbr_remaining; // remaining unprocessed
    private int   status;        // 0 = available, 1 = busy
    private int   next_output;   // next output subscript
    private int[] que_names;     // subscripts of all the queues
    private int[] output;        // output array

Another common place to hold information for an asynchronous request is an array of
those requests that have stalled. When the synchronous request takes longer than the user
can wait, the connection side of the request terminates. When an asynchronous request
takes longer than is prudent, the processing may be unable to complete and the request
stalls. There must be a place to put the information and a procedure for recovering from
the stall. The place is the StallDetail class. The thread that places the
information there is the Monitor, (we’ll get to this shortly). The procedure depends on the
exclusive needs of the user.

public final class StallDetail {

    private long  entered;       // time entered
    private long  at_name;       // Async Array generated name
    private int   gen_name;      // Async Array pointer
    private int   status;        // 0 = available, 1 = busy
    private int   times_checked; // times checked
    private int   failed_reason;  // why it is here

The request Function
How can a Server know the components of a request? The component structure is information
the developer knows from the beginning. In the basic framework, there is a single queue
for each Function. In the request broker framework, there is a list of queues for each
Function. The list of queues associated with each Function is the component
structure. The class is the FuncDetail array.

public final class FuncDetail {

   private String name;     // Function name
   private long   used;     // times used
   private int    agent;    // optional agent queue subscript
   private int    nbr_que;  // number of queues in this entry
   private int[]  qtbl;     // array of queues



Figure 3  Common Memory Referencing

The framework:

  • Places the passed Object, (input data), from the Client parameter, (FrameWorkParm),
    into the ObjDetail array
  • Creates a [A]SyncDetail Object with the list of queues and the subscript to the
    input data
  • Places the integer subscript for the [A]SyncDetail Object
    into the wait list of each queue in the list according to its priority. If the wait list
    for that priority is full, the subscript goes into the next higher wait list and sets an
    overflow indicator.
  • Wakes up a thread on each queue in the list

For the synchronous request, the framework waits until all queues finish processing.
The framework then concatenates the return objects from all the logical processes into a
single Object array and returns the Object array to the Client.

For the autonomous request, the framework returns to the Client with an "it’s been
scheduled" message. The processing takes place asynchronously. When the last queue’s
application finishes processing, the framework optionally concatenates the return
objects from all the logical processes into a single Object array and activates a new
logical process, the Agent.

The Monitor
An additional requirement for any asynchronous process is a way to monitor the logical
processes. The autonomous requests execute without any task waiting for their completion
and when they stall, detecting that stall is difficult. One way to monitor the environment
is with a daemon thread that scans the environment periodically. Daemon simply means that
it is not part of a particular application or RMI-Connection. When the monitor thread
finds a problem, it may log the problem, send a message to a middleware message queue or
internally notify another remote object. The action depends on the application. A common
function is to place the details of the autonomous request into a stalled array, (StallDetail).
What to do then is also application dependent.
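One possible shape for such a monitor daemon is sketched below, with assumed names and plain maps standing in for the framework's in-flight and StallDetail arrays.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical monitor daemon: periodically scan for requests that
// have been in flight too long and record them as stalled.
public class MonitorSketch extends Thread {
    private final Map<Integer, Long> inFlight;  // subscript -> start time
    private final Map<Integer, Long> stalled;   // the "StallDetail" idea
    private final long limitMillis, scanMillis;

    public MonitorSketch(Map<Integer, Long> inFlight, Map<Integer, Long> stalled,
                         long limitMillis, long scanMillis) {
        this.inFlight = inFlight; this.stalled = stalled;
        this.limitMillis = limitMillis; this.scanMillis = scanMillis;
        setDaemon(true);   // not tied to any application or RMI-Connection
    }

    // one scan pass; recovery from a recorded stall is application dependent
    public void scanOnce() {
        long now = System.currentTimeMillis();
        for (Map.Entry<Integer, Long> e : inFlight.entrySet()) {
            if (now - e.getValue() > limitMillis) {
                stalled.put(e.getKey(), e.getValue());
            }
        }
    }

    @Override public void run() {
        while (true) {
            scanOnce();
            try { Thread.sleep(scanMillis); } catch (InterruptedException e) { return; }
        }
    }
}
```

Because the thread is a daemon, it never prevents the Server's JVM from shutting down.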

Now that we’ve talked about threads and queues we’re sure it is a little
confusing. It is time to put it all together with a demonstration. If you haven’t downloaded the zip file yet, then do so now.

This demonstration requires at minimum the Java 1.1
platform. Unzip the file into a directory. The structure is as follows:

/Doc – Contains a single file, Doc.html, which documents all the classes and the
runtime procedure.
/Source – Contains all the source code for this article.
/Classes – Contains all the class files for this article, including a policy.all file for
the security policy.
Open the /Doc/Doc.html file for directions.

Follow the directions for starting the RMI Registry and the FrameWorkServer, (section, Runtime).

Follow the directions for starting a single-access Client, DemoClient_3, whose
function, F3, comprises three queues, (this is the section, "the first time").

This is what took place. The Client invoked the syncRequest() method on the FrameWorkServer
remote object passing a FrameWorkParm Object. The syncRequest():

  • Found that the requested function, F3, contained three queues, Q1, Q2 and Q3
  • Saved the Client’s passed input data Object in the ObjDetail array
  • Saved the "enhanced request" in the SyncDetail array and placed the subscript
    to it into wait lists in Q1, Q2 and Q3
  • Found that Q1 had no threads alive, so it instantiated a new thread*
  • Found that Q2 had no threads alive, so it instantiated a new thread*
  • Found that Q3 had no threads alive, so it instantiated a new thread*
    * [Had there been a thread alive, the syncRequest() would only have had to
    notify() it.]
  • Waited for the application to finish processing
  • When notified of the completion, picked up the return object from each logical
    process, concatenated them into a single Object array and returned that array to
    the Client
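The instantiate-or-notify() decision in the middle steps can be sketched as follows. QueueDispatcher is a hypothetical name, and the real framework keeps this state per queue:

```java
// Sketch of the per-queue dispatch rule: reuse a waiting application
// thread via notify(), or instantiate a new one when none is alive.
public class QueueDispatcher {
    private Thread worker;              // the queue's application thread
    private final Object lock = new Object();
    public boolean startedNewThread;    // exposed here for illustration only

    public void dispatch(Runnable applicationWork) {
        synchronized (lock) {
            if (worker == null || !worker.isAlive()) {
                worker = new Thread(applicationWork);  // no thread alive: create one
                worker.start();
                startedNewThread = true;
            } else {
                lock.notify();          // thread alive: just wake it up
                startedNewThread = false;
            }
        }
    }
}
```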

While the syncRequest() was waiting, each application thread:

  • Searched the wait lists for the first available request
  • Picked up the "enhanced request" from the SyncDetail array
  • Picked up the Client’s passed Object from the ObjDetail array
  • Called the appropriate application processing class to actually do the work for
    the request
  • Saved the return Object from the application processing class in the "enhanced
    request"
  • When it determined that all the other queues had finished processing, ‘woke up’
    the waiting RMI-Connection thread
  • Searched the wait lists for the next available request and, since none was found,
  • Issued a wait() until the next notify()
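That wait-list loop is the classic wait()/notify() consumer pattern. A self-contained sketch, with illustrative names rather than the framework's own:

```java
import java.util.*;

// Sketch of an application thread's loop: drain the wait list, then
// wait() until an RMI-Connection thread notify()s with more work.
public class QueueWorker implements Runnable {
    private final LinkedList<Runnable> waitList;
    private volatile boolean stopped;

    public QueueWorker(LinkedList<Runnable> waitList) { this.waitList = waitList; }

    public void run() {
        while (!stopped) {
            Runnable request;
            synchronized (waitList) {
                while (waitList.isEmpty() && !stopped) {
                    try { waitList.wait(); }       // no work: sleep until notify()
                    catch (InterruptedException e) { return; }
                }
                if (stopped) return;
                request = waitList.removeFirst();  // first available request
            }
            request.run();            // the application class does the real work
        }
    }

    public void shutdown() {
        stopped = true;
        synchronized (waitList) { waitList.notifyAll(); }
    }
}
```

Because the thread loops back for more work instead of dying, the create/destroy overhead of threading is paid once per queue rather than once per request.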

Load it up
The excitement comes when many Clients hit the Server simultaneously. Additionally,
without a visualization tool you would have no way of knowing what is going on. Within
this package, there are two classes that do just that, (this is the section, "load it
up").

Follow the directions for running the visualization tool, FrameWorkThreads.

Follow the directions for running the multiple Client threads class, DemoClientMultiBegin,
to put a load on the system.

After you are done with the Server, you may shut it down gracefully with a Client
request, DemoClient_Shutdown.

In this brief article, we can only examine the skeleton of an Asynchronous
Process Manager. Some supplemental elements are:

Error recovery: As above, "Sometimes the anomaly code far
outweighs the standard code". With a custom framework, the error recovery depends on
the application. Most detection depends on timing different aspects of the process.
Catching an exception is easy. Spotting a runaway thread is difficult. In order to know
what to look for, one must know what the application does.

Thresholds: When to instantiate or activate an application thread is
paramount. The way the code example sits, the only time the framework instantiates or
activates a new thread within a logical process is 1) when no thread is alive in the queue
or 2) when a new request into a wait list causes an overflow.
It is usually better to activate another thread when the load on that queue becomes
greater than some user-determined threshold. This is threshold processing. When the
RMI-Connection thread puts a new request into a wait list, the thread can determine the
current load and may start or activate another application thread.
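Such a threshold test might look like the following. The policy here, backlog greater than threshold times live threads, is one plausible choice, not the framework's actual rule:

```java
// Sketch of threshold processing: the RMI-Connection thread consults
// this policy each time it puts a request into a queue's wait list.
public class ThresholdPolicy {
    private final int threshold;        // the "user determined" value

    public ThresholdPolicy(int threshold) { this.threshold = threshold; }

    // true when the framework should start or activate another thread
    public boolean needAnotherThread(int waitListSize, int liveThreads) {
        if (liveThreads == 0) return true;              // rule 1: none alive
        return waitListSize > threshold * liveThreads;  // backlog too deep
    }
}
```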

Hooks and exits:  How does a developer handle connection pools?
How does a developer handle message queuing middleware packages? Remember, the Server is
persistent. You can add a start up hook in which you build a separate memory area where
you keep instantiated classes and private threads for these products. You can add a shut
down hook that gracefully shuts down the separate area.
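A sketch of that idea using the JVM's own shutdown-hook facility; the AutoCloseable pool is a stand-in for whatever connection-pool or middleware classes you keep in the separate memory area:

```java
import java.util.*;

// Sketch: build long-lived resources once at start up, release them
// gracefully when the Server shuts down.
public class ServerLifecycle {
    private final List<AutoCloseable> pools = new ArrayList<>();

    // start-up hook: remember the pool and arrange its clean-up
    public void startUp(AutoCloseable pool) {
        pools.add(pool);
        // registered per pool here for brevity; a real server would
        // register a single hook for the whole separate memory area
        Runtime.getRuntime().addShutdownHook(new Thread(this::shutDown));
    }

    // shut-down hook: close everything, ignoring late failures
    public void shutDown() {
        for (AutoCloseable p : pools) {
            try { p.close(); } catch (Exception ignored) { }
        }
        pools.clear();
    }
}
```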

Logging:  Anyone who has ever worked with a background process
knows how important it is to log errors. How else can anyone know what happened after a
failure? Any general-purpose log will suffice. Commercial products are available today and
the standard language will support logging in the near future.
See also: The open source project, Log4j, Nate Sammons’ article on Syslog, the AlphaWorks
Logging Toolkit for Java, in Resources.
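Even a minimal home-grown logger covers the basic need, since a background process has no console anyone watches. A hypothetical sketch:

```java
import java.io.*;
import java.util.Date;

// Sketch of a minimal server log: one timestamped line per event,
// appended to whatever Writer (typically a file) the server opens.
public class ServerLog {
    private final PrintWriter out;

    public ServerLog(Writer sink) { out = new PrintWriter(sink, true); }

    // synchronized so concurrent application threads do not interleave entries
    public synchronized void error(String where, Throwable t) {
        out.println(new Date() + " ERROR " + where + ": " + t);
    }
}
```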

Custom vs. Generic:  This is a custom framework. You build such a
system to support a set of applications. When your requirements are to support a wide
range of applications that do not fit into a set or there is no time to design one
yourself, then the better choice is to purchase a generic, full-feature Asynchronous
Process Manager.

Ok, there’s a lot to it. Nobody claims building backend applications is simple.
Remember, the Java architects put a colossal effort into
building the EJB and GUI frameworks. What do we now have?

We separated the RMI logic from the application logic. By doing this, we opened up the
world of application queuing and threading, (which is not restricted to RMI). This world
enabled us to:

  • Have RMI-Connection threads and application threads talk to each other
  • Time requests
  • Run autonomous requests without overloading the Server with application threads
  • Run Agents as part of any autonomous request
  • Process multiple requests from a Queue’s wait lists thereby reducing application
    start/stop overhead
  • Tune the Server by keeping counts of every event
  • Control the create/destroy overhead inherent with threading
  • Easily plug-in any application class as the subject of an application thread
  • Effortlessly trap a thread or application class for debugging
  • Run recursive requests from any application
  • Gracefully shut down the RMI Server

Then we enhanced the single-process environment into a request broker capable of
parallel processing. We enriched the common memory environment to:

  • Run parallel queue processing, (request brokering)
  • Use a Monitor to seek out non-performing requests and a method to deal with them
  • Easily log events
  • Add almost any middleware product as an exit or hook
  • Completely customize the code for any application

Henceforth, the RMI Server box is no longer empty.


About the Author
Since his thesis on "Transactional Queuing and Sub-Tasking", Edward Harned has been actively honing his
multi-threading and multi-processing talents, first leading projects as an employee in
major industries and then as an independent consultant. Today, Ed is a senior developer at
Cooperative Software Systems where, for the last
four years, he has used Java to bring asynchronous-process solutions to a wide range of
tasks. When not talking to threads, he sails, skis and photographs in New England. 


This article was first published by IBM developerWorks.


  • Download the zip file,
    (~160K), for this article. We provide this software under the GNU GENERAL PUBLIC
    LICENSE,  Version 2, June 1991.

©  2002 Cooperative Software Systems, Inc.  All rights reserved.
