
Jetty Continuations: Push Your Java Server Beyond Its Scalability Limits


With the phenomenal growth in Internet usage, a Web server’s limited scalability often is a bottleneck when handling increased user loads. In fact, scalability is a necessity when business expansion results in more user interaction with your Web applications. These applications include functions such as chat, collaboration, bidding, RSS broadcast, GPS, and real-time data.

Two challenges in particular that current Web servers and applications encounter are the thread-per-connection concurrency model and asynchronous JavaScript and XML (Ajax). One solution for improving throughput and scalability beyond their current optimum without adding resources is Jetty Continuations.

Jetty Continuations suspend an HTTP request and release its thread back to the thread pool. When an event or timeout occurs, the suspended request is resumed. This approach avoids the thread-per-connection limitation of Web servers, allowing the server to scale for heavy loads. It is ideal for applications that must scale for more and more concurrent users, such as chat, price tickers, collaborative editing, etc.

While Continuations are an obvious solution for such applications, they are also complex to implement. Luckily, Jetty introduced an API for request suspension and resumption, and in Jetty 7 the Continuations API has been extracted as a separate library that can be used in any Servlet 3.0 container (it also works on Servlet 2.5 containers, falling back to blocking behavior).

In this article, I will present a use case that explains how to use the Continuations API: the implementation of an alert and job-scheduling module in a Web server that uses Continuations to scale. Although the demo system does not use Ajax Push, it can be extended to scale to hundreds of requests on the server. This keeps the alert/job-scheduling system lightweight in terms of latency and scalability.

Jetty Continuations Use Case: Stock Trading

I have chosen a hypothetical case of the trading domain. Suppose we are building a Web application for stock trading that allows the user to create his or her own portfolio, access a stock ticker of live feeds, and set up alerts, stock updates, trade summary reports, and so on.

For alert/job-scheduling use cases, suppose we need to implement the following features:

  • Stock alert: Assume this application has a way to set a threshold value for each stock. When the stock price surpasses or falls below that threshold value (depending on whether it is a positive or negative threshold), an alert needs to be sent as an SMS or email. This feature has more data to crunch because it must check each user’s stocks against his/her alert setup.
  • Trade Summary: The application needs to send a trade summary to each user at the user-configured frequency (e.g., every day or every hour).
  • Stock Updates Alert: The user can select his/her favorite stocks and get alerts when substantial changes in stock price occur. The application can be even smarter by reading news feeds from Web-based financial news providers and alerting the user when news breaks about his/her selected stocks.
  • Data Archive: The application will archive older records from an RDBMS/repository to make the database more efficient and manage disk space.

We need to implement these four tasks in a Web container. If we deployed these jobs on a dedicated Web container that hosted only this deployment, and if the jobs ran only, say, once a day, then scaling would not be a great challenge (assuming the data is not huge and more frequent jobs are not added later). But let’s use only one container for the whole application, which includes:

  • All the UI-based use cases server code (Servlet/JSP, etc.)
  • All the scheduling tasks code and server code (Servlet) that implements the alerts and jobs (stock alert, update, etc.)

Having only one Web server will be easier to maintain, but scalability becomes an issue because the UI use cases will keep hundreds of simultaneous connections open on the server. Using Jetty Continuations is therefore wise even for the alerts/jobs implementation, and for UI-related use cases such as the stock ticker, Ajax Push (CometD) + Jetty Continuations is the best solution for scalability. In this article, however, we will see that even alerts and scheduled jobs can leverage Continuations to put less load on the container.

Figure 1 shows a block diagram of the application inside a Jetty container.


Three main blocks in Figure 1 deserve some further explanation.

  1. Task Scheduler: The task scheduler should schedule jobs as soon as the server starts. For this, I used the ServletContextListener interface that has been defined in the Servlet API since version 2.3. The implementation of this interface needs to be configured with the application. Containers notify this implementation about any changes to the Servlet context of the Web application using two methods:
    • void contextDestroyed(ServletContextEvent sce)
    • void contextInitialized(ServletContextEvent sce)

    The contextInitialized method will be called by the container when it starts this Web application, so we should schedule all the tasks in this method. The task scheduler will use java.util.Timer to schedule tasks implemented as java.util.TimerTask subclasses.

    For simplicity, the task-scheduling module is part of the Web app itself; otherwise, it could be a Windows scheduled task or a UNIX cron job.

  2. Tasks: Task classes extend java.util.TimerTask. These tasks call the HTTP URL that executes the task’s business logic.
  3. Tasks/Alert HTTP Services: The actual alerts and jobs are published as Servlets in the container, so that they can also be accessed from any external scheduling system. These Servlets contain the Continuations-related code that releases threads and resumes after timeouts, as explained in the next sections.
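The scheduling wiring described above can be sketched with plain JDK classes. This is a minimal sketch under stated assumptions: the class name, thread name, and the job URL in the comment are hypothetical, and in the real application the scheduling calls would sit inside ServletContextListener.contextInitialized(...).

```java
import java.util.Timer;
import java.util.TimerTask;

// Hypothetical sketch of the task-scheduling module. In the web app this
// scheduling would be triggered from contextInitialized(...) so the jobs
// start as soon as the container starts the application.
public class TaskScheduler {
    // Daemon timer thread so it does not block container shutdown.
    private final Timer timer = new Timer("task-scheduler", true);

    // Schedule a job to run repeatedly at a fixed period (milliseconds).
    public TimerTask scheduleAtFixedRate(final Runnable job, long delayMs, long periodMs) {
        TimerTask task = new TimerTask() {
            @Override
            public void run() {
                // In the demo, each task would open the HTTP URL of its
                // alert/job Servlet, e.g. (hypothetical URL):
                // new java.net.URL("http://localhost:8080/app/stockAlert").openStream()
                job.run();
            }
        };
        timer.scheduleAtFixedRate(task, delayMs, periodMs);
        return task;
    }

    public void shutdown() {
        timer.cancel();
    }
}
```

Each of the four jobs (stock alert, trade summary, stock updates, data archive) would be scheduled this way with its own frequency.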

The Jetty Continuation API in Brief

The ContinuationSupport factory class is used to create a Continuation, which is identified by the HttpServletRequest. This enables the container to resume the old request.

Continuation continuation = ContinuationSupport.getContinuation(request);

The Continuation class can be used to suspend a request.

continuation.suspend();

Optionally, continuation.setTimeout(long) can be used, as is the case in this demo application. This method sets the timeout (in milliseconds) explicitly, after which the suspended request is resumed automatically.

The Continuation class can be used to resume a request.

continuation.resume();


The timeout should be set before suspending the request. If the Continuation times out and no listener resumes or completes it, the request is resumed as an expired request. That can be checked using the continuation.isExpired() method.

The other option is to retain the response by suspending with it.

continuation.suspend(response);

This variation can be used if you need the response when the request resumes or completes; otherwise, the same response object would not be available when the request is resumed.

The getServletResponse method returns the Servlet response if it was set using continuation.suspend(response).

continuation.getServletResponse()

If it was not set, it returns null.

The Continuation Listener is an event-based listener that you implement if you want to take some action on these events.

public interface ContinuationListener
{
    public void onTimeout(Continuation continuation);
    public void onComplete(Continuation continuation);
}

The implementation in this article uses this listener because it is not a UI-based application and therefore does not use push server handlers. In the demo application, Servlets are called synchronously from TimerTask and have no other way of handling the asynchronous response.

Register the listener with the Continuation so that the Continuation API can call the event methods at the right time. The listener must be registered before calling continuation.suspend().

continuation.addContinuationListener(myContinuationListener);

This ends the explanation of the Continuation API. The next section covers Continuation lifecycle and business logic.

Continuation Lifecycle and Business Logic

ServletRequest can be used to communicate lifecycle state to the Servlet component. The demo application uses ServletRequest for this communication: ServletRequest.setAttribute(…), ServletRequest.getAttribute(…), and ServletRequest.getParameter(…).

long timeoutMS = 8000;

Continuation continuation = ContinuationSupport.getContinuation(request);

String asyncRequestAttr  = (String) request.getAttribute("asyncRequest");
String asyncRequestParam = request.getParameter("asyncRequest");

if ((asyncRequestParam != null && asyncRequestParam.equals("Y"))
        && (asyncRequestAttr == null || asyncRequestAttr.equals("N")))
{
    continuation.addContinuationListener(new MyContinuationListener());
    // wait for an event or timeout
    continuation.setTimeout(timeoutMS);
    continuation.setAttribute("asyncRequest", "N");
    continuation.suspend();
    return;
}
else if (asyncRequestAttr != null && asyncRequestAttr.equals("Y"))
{
    StringBuilder resp = new StringBuilder();
    resp.append("<h2>Happy to see response from AsyncServlet</h2>");
    PrintWriter writer = response.getWriter();
    writer.write(resp.toString());
    writer.flush();
    writer.close();
    return;
}
else
{
    System.out.println("no match for requestParam/Attr; there is some problem: ");
    return;
}

A pair of methods provided by Continuation is required for keeping the business logic in sync with the Continuation lifecycle: continuation.getAttribute(…) and continuation.setAttribute(…). These methods set/update user-defined key/value pairs (with different values at different stages of the Continuation).

In terms of Continuation listener logic, the following listener code shows how to resume requests in the onTimeout method.

class MyContinuationListener implements ContinuationListener
{
	@Override
	public void onComplete(Continuation continuation) {
		System.out.println("in onComplete ...");
		String asyncRequest = (String) continuation.getAttribute("asyncRequest");
		System.out.println(asyncRequest);
		continuation.setAttribute("asyncRequest", "N");
		System.out.println("request completed");
	}

	@Override
	public void onTimeout(Continuation continuation) {
		System.out.println("onTimeout, resuming ...");
		String asyncRequest = (String) continuation.getAttribute("asyncRequest");
		System.out.println(asyncRequest);
		continuation.setAttribute("asyncRequest", "Y");
		continuation.resume();
		System.out.println("resumed; let's see the request onTimeout");
	}
}

 

The onComplete method is implemented because the interface requires it, but it is not very useful in our use case, so it just contains some sysout statements.

Calling continuation.suspend() does not close the HTTP (TCP/IP) connection; only the thread assigned to fulfill that request is released back to the server’s thread pool. This means the lifecycle of the request is extended from the suspend stage to the resume/complete stage, so with synchronous HTTP connections the user has to wait until the whole lifecycle is complete (the same goes for synchronous HTTP clients). Therefore, if the client doesn’t want to wait for the response and is not dependent on it, it can intentionally set its own timeout lower than the Continuation timeout. That way, the client can send more requests even while the server is still processing the old request. Jetty also provides an asynchronous HTTP client API that can be handy in these situations.
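The idea of setting the client timeout lower than the server’s Continuation timeout can be sketched with the plain JDK HttpURLConnection class. This is a minimal sketch, not the article’s demo code: the class name is hypothetical, and the 8000 ms figure in the comment refers to the Continuation timeout used earlier in the article.

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.SocketTimeoutException;
import java.net.URL;

// Hypothetical "fire and mostly forget" client: its read timeout is lower
// than the server's Continuation timeout, so it gives up waiting while the
// server keeps processing the suspended request.
public class ImpatientClient {
    // Returns the response body, or null if the client gave up waiting.
    public static String fetchOrGiveUp(String url, int timeoutMs) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setConnectTimeout(timeoutMs);
        conn.setReadTimeout(timeoutMs); // e.g. lower than the 8000 ms Continuation timeout
        try {
            InputStream in = conn.getInputStream();
            StringBuilder body = new StringBuilder();
            int c;
            while ((c = in.read()) != -1) {
                body.append((char) c);
            }
            in.close();
            return body.toString();
        } catch (SocketTimeoutException e) {
            // Give up; the server may still complete the suspended request later.
            conn.disconnect();
            return null;
        }
    }
}
```

With this pattern the caller treats a null return as "response not needed in time" and simply moves on to its next request.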

Conclusion

Continuation is an innovative extension for a conventional Java-based Web server that is reaching its limits in handling concurrent requests; Servlet 3.0 implements similar features using the same principles. The demo shown in this article builds a custom task-scheduling module within a Web application and uses Jetty Continuations to improve the scalability of the application. Combining this feature with HTTP can be useful in many unconventional applications, such as distributed processing with low latency and high throughput where data is not critical and can be lost during computation. For instance, if the Web server crashes while holding a Continuation request in memory, that request is lost unless its data is kept on some reliable platform.

Acknowledgments

I want to thank my manager Shyam Kumar and Raghavan Subramanian for their guidance and encouragement while I wrote this article.

 

About the Authors

Manish Malhotra is a Technology Lead with SETLabs (Software Engineering and Technology Labs), the R&D division of Infosys Technologies Ltd. He works in the JEECoe group. Apart from designing and implementing Java- and Java EE-based applications, he has also worked on software engineering and semantic analysis research.
