
The Java RMI Server Framework

  • March 29, 2002
  • By Ed Harned

Autonomous request
Because the autonomous request does not require the RMI-Connection thread to wait for completion, it may seem very simple, but there is one catch. What happens to the return data from the doWork() method of the application processing class? Where its return data goes should not be the concern of an application. A developer must be able to use the same application for a synchronous or an asynchronous request. Therefore, we need an Agent for the autonomous request.

Part of the class definition of a Function is a field for the optional Agent Queue. The application processing class may return an Object to the application thread. For an autonomous request, when desirable, the application thread may activate a logical process by creating a new "enhanced request", (with the just-returned Object), placing that "enhanced request" into the Agent queue's wait list and waking up an Agent thread. This is very similar to a standard autonomous request that comes from the Client. The Agent logical process completes asynchronously, without any return data. The Agent queue's application processing class is where you may place the call-back, call-forward or any other logic necessary for dealing with a request completion. [This is a little confusing, but hang in there; eventually it becomes clear.]
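
A minimal sketch of that hand-off follows; the AgentQueue and EnhancedRequest names are invented for illustration and are not the framework's own classes (those are in the zip file).

// Hypothetical sketch of the Agent hand-off for an autonomous request.
// AgentQueue and EnhancedRequest are illustrative names only.
import java.util.Vector;

public final class AgentQueue {

    private final Vector waitList = new Vector();   // the Agent queue's wait list

    // Called by the application thread after doWork() returns its Object.
    public synchronized void schedule(Object returnData) {
        waitList.addElement(new EnhancedRequest(returnData)); // build the "enhanced request"
        notify();                                             // wake up a sleeping Agent thread
    }

    // Called by an Agent thread looking for work; blocks until work arrives.
    public synchronized EnhancedRequest nextRequest() throws InterruptedException {
        while (waitList.isEmpty()) {
            wait();
        }
        EnhancedRequest next = (EnhancedRequest) waitList.elementAt(0);
        waitList.removeElementAt(0);
        return next;
    }
}

final class EnhancedRequest {
    private final Object returnData;                // the Object returned by doWork()
    EnhancedRequest(Object returnData) { this.returnData = returnData; }
    Object getReturnData() { return returnData; }
}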

Shut down
In order to gracefully shut down the RMI Server, (rather than using something like a kill -9 <pid>), every implementation should contain a shut down method. However, if the shut down method simply ends the Java Virtual Machine, the method's return message never makes it back to the Client. The better way is to start a shut down thread. The shut down thread sleeps about two seconds, (to give the method's return message a chance to clear the virtual machine), and then the shut down thread issues System.exit(0).
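
A minimal sketch of that shut down method looks like the following; the method name and return message are illustrative only, and the demonstration code in the zip file has its own versions.

// Illustrative sketch of a graceful shut down: reply to the Client first,
// then let a separate thread end the JVM after the reply has cleared.
public String shutDownRequest() {
    Thread shutDown = new Thread(new Runnable() {
        public void run() {
            try {
                Thread.sleep(2000);      // give the return message time to reach the Client
            } catch (InterruptedException ignored) {
            }
            System.exit(0);              // end the Java Virtual Machine
        }
    });
    shutDown.setDaemon(false);           // a non-daemon thread keeps the JVM alive until exit() runs
    shutDown.start();
    return "Shut down scheduled";        // this message makes it back to the Client
}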

Critique
That wasn't so bad. Now it is time to ask the three major questions necessary for every project:

  1. What good does it do?
  2. Does it achieve what it promises?
  3. Is there anything more to it than its pompous invocation of time immemorial?

The answers to questions one and two are both affirmative.

  1. This framework gives us the ability:
  • to time RMI Server requests
  • to run autonomous RMI Server requests
  • to limit the create/destroy overhead common with application threads
  • to curtail the thread overload problem
  • to easily debug requests, especially autonomous requests. (We know the application thread and the application class where the work resides. There is no need to trap extraneous code.)

This framework separates the RMI threading environment from the application threading environment. Additionally, it separates the application thread logic from the actual application logic. This provides the greater degree of abstraction that is so important in Object-Oriented Design.

  2. As you will see from the zip file code, this framework performs superbly.

The answer to the third question is: it's nice, but ... The return on investment may not be worth the effort involved. An additional, critical part of the structure is error recovery. Sometimes the anomaly code far outweighs the standard code. What this framework needs is a bigger reason for living.

What if we could expand this simple framework to support multiple queues per request? That is -- when a Client request involves multiple accesses to resources, if we could split the request and place each component into its own queue, then we could parallel process the request. Now, the possibilities are endless. This is request brokering and it is the subject of the next section.


The Request Broker
Having set up a basic queuing environment, it soon becomes evident that some requests really contain multiple actions, or components. For instance -- a request may require accesses to two different databases. We could access each database in a linear fashion, but then the second access must wait for the first to complete.

A better way to handle a multi-action request is to separate the request into its component parts and place each component into a separate queue. This is parallel processing. It is more difficult than linear processing, but the benefits far outweigh the extra work up front.

What do we need to support request brokering? We need to understand that the Client request is no longer the exclusive concern of a single logical process. Therefore, we must put the Client's request into a common area so that any number of logical processes may access it. Since we already have a "common memory" environment, we must now enhance it.

We need a common place to put the Object from the Client, (this is the input data Object within the FrameWorkParm class). A simple array of Objects is all that is necessary. For the example, the basic class is ObjDetail. The ObjDetail class contains two fields, the Object from the Client and a status indicator.

public final class ObjDetail {

   private Object obj;     // object from Client
   private int    status;  // 0 = available, 1 = busy

The array in which the ObjDetail Object resides is a linked-list. (All the arrays in this framework are linked-lists.) Access to entries within the linked-list is direct, by subscript. When an RMI-Connection thread puts an Object into the list, all that the RMI-Connection thread must pass to an application thread is the primitive integer subscript.
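
As a rough illustration of that idea, with an invented class name and a fixed array size (the free-list handling that makes the real structure a linked-list is omitted), the put/get pair might look like this:

// Illustrative only: a fixed array addressed by subscript.
public final class CommonObjectArea {

    private final Object[] obj    = new Object[64];  // Objects from Clients (size is arbitrary here)
    private final int[]    status = new int[64];     // 0 = available, 1 = busy

    // RMI-Connection thread: store the Client's Object, get back its subscript.
    public synchronized int put(Object fromClient) {
        for (int i = 0; i < status.length; i++) {
            if (status[i] == 0) {
                status[i] = 1;
                obj[i] = fromClient;
                return i;                             // only this primitive int is passed on
            }
        }
        return -1;                                    // no free entry (overflow handling omitted)
    }

    // Application thread: fetch the Object directly by subscript.
    public synchronized Object get(int subscript) {
        return obj[subscript];
    }
}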

We need a common place to hold the request from the Client both for a synchronous and for an asynchronous request. Once again, simple arrays of Objects are all that is necessary. Both basic Objects must keep the subscript to the Object from the Client, (above), and an integer array of subscripts to the Objects from the applications' return data. The classes for this example are SyncDetail and AsyncDetail.

public final class SyncDetail {

    private int[] output;        // output data array pointers
    private int   input;         // input area pointer, if any
    private int   status;        // 0=avail 1=busy
    private int   nbr_que;       // total queues in function
    private int   nbr_remaining; // remaining to be processed
    private int   next_output;   // next output in list   
    private int   wait_time;     // max wait time in seconds 
    private Object requestor;    // requestor obj to cancel wait
    private int[] pnp;           // 0 = not posted, 1 = posted

public final class AsyncDetail {

    private int   input;         // pointer to input
    private int   out_agent;     // pointer to agent name
    private int   function;      // pointer to function name
    private int   nbr_que;       // nbr of queues in function
    private int   nbr_remaining; // remaining unprocessed
    private int   status;        // 0 = available, 1 = busy
    private int   next_output;   // next output subscript
    private int[] que_names;     // subscripts of all the queues
    private int[] output;        // output array

Another common place to hold information for an asynchronous request is an array of those requests that have stalled. When a synchronous request takes longer than the user can wait, the connection side of the request terminates. When an asynchronous request takes longer than is prudent, the processing may be unable to complete and the request stalls. There must be a place to put the information and a procedure for recovering from the stall. The place is the StallDetail class. The thread that places the information there is the Monitor, (we'll get to this shortly). The procedure depends entirely on the needs of the user.

public final class StallDetail {

    private long  entered;       // time entered
    private long  at_name;       // Async Array generated name
    private int   gen_name;      // Async Array pointer
    private int   status;        // 0 = available, 1 = busy
    private int   times_checked; // times checked
    private int   failed_reason;  // why it is here

The request Function
How can a Server know the components of a request? The component structure is information the developer knows from the beginning. In the basic framework, there is a single queue for each Function. In the request broker framework, there is a list of queues for each Function. The list of queues associated with each Function is the component structure. The class is the FuncDetail array.

public final class FuncDetail {

   private String name;     // Function name
   private long   used;     // times used
   private int    agent;    // optional agent queue subscript
   private int    nbr_que;  // number of queues in this entry
   private int[]  qtbl;     // array of queues

 

Memory Referencing

Figure 3: Common Memory Referencing

The framework:

  • Places the passed Object, (input data), from the Client parameter, (FrameWorkParm), into the ObjDetail array
  • Creates a [A]SyncDetail Object with the list of queues and the subscript to the input data
  • Places the integer subscript for the [A]SyncDetail Object into the wait list of each queue in the list according to its priority. If the wait list for that priority is full, the subscript goes into the next higher wait list and sets an overflow indicator.
  • Wakes up a thread on each queue in the list

For the synchronous request, the framework waits until all queues finish processing. The framework then concatenates the return objects from all the logical processes into a single Object array and returns the Object array to the Client.
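
A hedged sketch of that wait-and-concatenate step follows; the class and method names are invented stand-ins, and the real SyncDetail logic in the zip file is richer (priorities, overflow, posted flags).

// Hypothetical sketch: wait until every queue has posted its return Object,
// then hand the combined Object array back to the RMI-Connection thread.
public final class SyncCollector {

    private final Object[] output;          // one slot per queue in the Function
    private int remaining;                  // queues still processing

    public SyncCollector(int nbrQueues) {
        output = new Object[nbrQueues];
        remaining = nbrQueues;
    }

    // Called by each application thread when its doWork() returns.
    public synchronized void post(int queueIndex, Object returnData) {
        output[queueIndex] = returnData;
        if (--remaining == 0) {
            notify();                       // wake the waiting RMI-Connection thread
        }
    }

    // Called by the RMI-Connection thread; returns the concatenated results.
    public synchronized Object[] awaitAll(long maxWaitMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + maxWaitMillis;
        while (remaining > 0) {
            long left = deadline - System.currentTimeMillis();
            if (left <= 0) {
                break;                      // stalled request: the caller decides what to do
            }
            wait(left);
        }
        return output;
    }
}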

For the autonomous request, the framework returns to the Client with an "it's been scheduled" message. The processing takes place asynchronously. When the last queue's application finishes processing, the framework optionally concatenates the return objects from all the logical processes into a single Object array and activates a new logical process, the Agent.

The Monitor
An additional requirement for any asynchronous process is a way to monitor the logical processes. The autonomous requests execute without any task waiting for their completion, and when they stall, detecting that stall is difficult. One way to monitor the environment is with a daemon thread that scans it periodically. Daemon simply means that it is not part of a particular application or RMI-Connection. When the monitor thread finds a problem, it may log the problem, send a message to a middleware message queue or internally notify another remote object. The action depends on the application. A common function is to place the details of the autonomous request into a stalled array, (StallDetail). What to do then is also application dependent.
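
A bare-bones sketch of such a monitor thread follows; the scan interval and the empty scan routine are placeholders, and the monitor shipped in the zip file does considerably more.

// Illustrative only: a daemon thread that periodically scans for stalled
// autonomous requests.
public final class Monitor implements Runnable {

    private static final long SCAN_INTERVAL_MILLIS = 30 * 1000;  // arbitrary interval

    public void run() {
        while (true) {
            try {
                Thread.sleep(SCAN_INTERVAL_MILLIS);
            } catch (InterruptedException e) {
                return;                                   // shut down requested
            }
            scanForStalledRequests();
        }
    }

    private void scanForStalledRequests() {
        // Walk the AsyncDetail array; any request older than its limit gets
        // copied into the StallDetail array and, optionally, logged or
        // forwarded to a message queue -- the action is application dependent.
    }

    public static void start() {
        Thread t = new Thread(new Monitor(), "FrameWork-Monitor");
        t.setDaemon(true);                                // not tied to any Client or RMI connection
        t.start();
    }
}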

Runtime
Now that we've talked about threads and queues, we're sure it is a little confusing. It is time to put it all together with a demonstration. If you haven't downloaded the zip file yet, then do so now.

This demonstration requires at minimum the Java 1.1 platform. Unzip the file into a directory. The structure is as follows:

/Doc - Contains a single file, Doc.html, which documents all the classes and the runtime procedure.
/Source - Contains all the source code for this article.
/Classes - Contains all the class files for this article including a policy.all file for security.

Open the /Doc/Doc.html file for directions.

Follow the directions for starting the RMI Registry and the FrameWorkServer, (section, Runtime).

Follow the directions for starting a single-access Client, DemoClient_3, whose Function is F3, which comprises three queues, (this is the section, the first time).

This is what took place. The Client invoked the syncRequest() method on the FrameWorkServer remote object passing a FrameWorkParm Object. The syncRequest():

  • Found that the requested function, F3, contained three queues, Q1, Q2 and Q3
  • Saved the Client's passed input data Object in the ObjDetail array
  • Saved the "enhanced request" in the SyncDetail array and placed the subscript to it into wait lists in Q1, Q2 and Q3
  • Found that Q1 had no threads alive, so it instantiated a new thread*
  • Found that Q2 had no threads alive, so it instantiated a new thread*
  • Found that Q3 had no threads alive, so it instantiated a new thread*
    * [Had there been a thread alive, the syncRequest() would only have had to notify() it.]
  • Waited for the application to finish processing
  • When notified of the completion, it picked up the return objects from each logical process, concatenated each object into an Object array and returned the Object array to the Client.

While the syncRequest() was waiting, each application thread (a condensed code sketch of this loop follows the list):

  • Searched the wait lists for the first available request
  • Picked up the "enhanced request" from the SyncDetail array
  • Picked up the Client's passed Object from the ObjDetail array
  • Called the appropriate application processing class to actually do the work for the Queue
  • Saved the return Object from the application processing class in the "enhanced request"
  • (When it determined that all other Queues had finished processing), it 'woke up' the waiting RMI-Connection thread
  • Searched the wait lists for the first available request and since none was found,
  • Issued a wait() until the next notify().
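
Condensed into code, that application-thread loop looks roughly like this. The nested interfaces are hypothetical stand-ins for the framework's real wait-list and request structures; only the overall wait()/notify() pattern is the point.

// Condensed, hypothetical sketch of an application thread's main loop.
public final class ApplicationThread extends Thread {

    public interface WorkRequest {
        Object  clientObject();              // Object from the ObjDetail array
        void    saveReturnObject(Object o);  // store this Queue's result
        boolean lastQueueToFinish();         // true when every Queue has posted
        void    notifyRequestor();           // wake the waiting RMI-Connection thread
    }

    public interface Queue {
        WorkRequest nextAvailable() throws InterruptedException; // blocks in wait() when empty
    }

    public interface Processor {
        Object doWork(Object input);         // the application processing class
    }

    private final Queue queue;
    private final Processor processor;

    public ApplicationThread(Queue queue, Processor processor) {
        this.queue = queue;
        this.processor = processor;
    }

    public void run() {
        try {
            while (true) {
                WorkRequest request = queue.nextAvailable();      // search the wait lists
                Object output = processor.doWork(request.clientObject());
                request.saveReturnObject(output);
                if (request.lastQueueToFinish()) {
                    request.notifyRequestor();                    // last Queue in: wake the requestor
                }
            }
        } catch (InterruptedException e) {
            // shut down: fall out of the loop and let the thread end
        }
    }
}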

Load it up
The excitement comes when many Clients hit the Server simultaneously. Additionally, without a visualization tool, you would have no way of knowing what is going on. Within this package, there are two classes that do just that, (this is the section, load it up).

Follow the directions for running the visualization tool, FrameWorkThreads.

Follow the directions for running the multiple Client threads class, DemoClientMultiBegin, to put a load on the system.

After you are done with the Server, you may shut it down gracefully with a Client request, DemoClient_Shutdown.

Sequel
In this brief article, we can only examine the skeleton of an Asynchronous Process Manager. Some supplemental elements are:

Error recovery: As above, "Sometimes the anomaly code far outweighs the standard code". With a custom framework, the error recovery depends on the application. Most detection depends on timing different aspects of the process. Catching an exception is easy. Spotting a runaway thread is difficult. In order to know what to look for, one must know what the application does.

Thresholds: When to instantiate or activate an application thread is paramount. As the code example sits, the only time the framework instantiates or activates a new thread within a logical process is 1) when no thread is alive in the queue or 2) when a new request into a wait list causes an overflow. It is usually better to activate another thread when the load on that queue becomes greater than some user-determined value. This is threshold processing. When the RMI-Connection thread puts a new request into a wait list, the thread can determine the current load and may start or activate another application thread.
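
A sketch of that check follows; the threshold and thread ceiling are the user-determined values, and none of these names come from the framework source.

// Hypothetical sketch of threshold processing on a single queue.
public final class ThresholdQueue {

    private final java.util.Vector waitList = new java.util.Vector();
    private final int threshold;        // backlog that triggers another thread
    private final int maxThreads;       // hard ceiling on threads for this queue
    private int aliveThreads = 0;

    public ThresholdQueue(int threshold, int maxThreads) {
        this.threshold = threshold;
        this.maxThreads = maxThreads;
    }

    // Called by the RMI-Connection thread when it adds a request subscript.
    public synchronized void putRequest(int requestSubscript) {
        waitList.addElement(new Integer(requestSubscript));
        int backlog = waitList.size();
        if ((aliveThreads == 0 || backlog > threshold) && aliveThreads < maxThreads) {
            aliveThreads++;
            startApplicationThread();   // instantiate/activate another application thread
        } else {
            notify();                   // an existing, waiting thread can take it
        }
    }

    private void startApplicationThread() {
        // The real framework would create a thread bound to this queue's
        // application processing class; omitted here.
    }
}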

Hooks and exits:  How does a developer handle connection pools? How does a developer handle message queuing middleware packages? Remember, the Server is persistent. You can add a start up hook in which you build a separate memory area where you keep instantiated classes and private threads for these products. You can add a shut down hook that gracefully shuts down the separate area.
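
One hedged way to organize such hooks is sketched below; the ConnectionPool type and both hook methods are placeholders for whatever middleware product the Server actually uses.

// Illustrative only: a start up/shut down hook pair for middleware resources.
public final class MiddlewareArea {

    private static ConnectionPool pool;              // kept for the life of the Server

    // Called once when the FrameWorkServer starts.
    public static void startUpHook() {
        pool = new ConnectionPool();                 // build the separate memory area
    }

    // Called from the Server's shut down method, before System.exit(0).
    public static void shutDownHook() {
        if (pool != null) {
            pool.closeAll();                         // release middleware resources gracefully
        }
    }

    // Placeholder so the sketch compiles on its own.
    static final class ConnectionPool {
        void closeAll() { /* close pooled connections here */ }
    }
}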

Logging:  Anyone who has ever worked with a background process knows how important it is to log errors. How else can anyone know what happened after a failure? Any general-purpose log will suffice. Commercial products are available today and the standard language will support logging in the near future.
See also: The open source project, Log4j, Nate Sammons' article on Syslog, the AlphaWorks Logging Toolkit for Java, in Resources.

Custom vs. Generic:  This is a custom framework. You build such a system to support a set of applications. When your requirements are to support a wide range of applications that do not fit into a set or there is no time to design one yourself, then the better choice is to purchase a generic, full-feature Asynchronous Process Manager.

Conclusion
Ok, there's a lot to it. Nobody claims building backend applications is simple. Remember, the Java architects put a colossal effort into building the EJB and GUI frameworks. What do we now have?

We separated the RMI logic from the application logic. By doing this, we opened up the world of application queuing and threading, (which is not restricted to RMI). This world enabled us to:

  • Have RMI-Connection threads and application threads talk to each other
  • Time requests
  • Run autonomous requests without overloading the Server with application threads
  • Run Agents as part of any autonomous request
  • Process multiple requests from a Queue's wait lists thereby reducing application start/stop overhead
  • Tune the Server by keeping counts of every event
  • Control the create/destroy overhead inherent with threading
  • Easily plug-in any application class as the subject of an application thread
  • Effortlessly trap a thread or application class for debugging
  • Run recursive requests from any application
  • Gracefully shut down the RMI Server

Then we enhanced the single-process environment into a request broker capable of parallel processing. We enriched the common memory environment to:

  • Run parallel queue processing, (request brokering)
  • Use a Monitor to seek out non-performing requests and a method to deal with them
  • Easily log events
  • Add almost any middleware product as an exit or hook
  • Completely customize the code for any application

Henceforth, the RMI Server box is no longer empty.

 

About the Author
Since his thesis on "Transactional Queuing and Sub-Tasking", Edward Harned has been actively honing his multi-threading and multi-processing talents, first leading projects as an employee in major industries and then as an independent consultant. Today, Ed is a senior developer at Cooperative Software Systems where, for the last four years, he has used Java to bring asynchronous-process solutions to a wide range of tasks. When not talking to threads, he sails, skis and photographs in New England.

Acknowledgements

This article was first published by IBM developerWorks.


Resources

  • Download the zip file, (~160K), for this article. We provide this software under the GNU GENERAL PUBLIC LICENSE,  Version 2, June 1991.

©  2002 Cooperative Software Systems, Inc.  All rights reserved.




