Jetty Continuations: Push Your Java Server Beyond Its Scalability Limits
With the phenomenal growth in Internet usage, a Web server's limited scalability often is a bottleneck when handling increased user loads. In fact, scalability is a necessity when business expansion results in more user interaction with your Web applications. These applications include functions such as chat, collaboration, bidding, RSS broadcast, GPS, and real-time data.
Jetty Continuations suspend an HTTP request and release its thread back to the thread pool. When an event or timeout occurs, the suspended request is resumed. This approach avoids the thread-per-connection limitation of Web servers, allowing the server to scale under heavy loads. It is ideal for applications that must scale to more and more concurrent users, such as chat, price tickers, collaborative editing, etc.
While Continuations are an obvious solution for such applications, they are also complex to implement. Luckily, Jetty introduced an API for request suspension and resumption, and in Jetty 7 the Continuations API has been extracted as a separate library that can be used in any Servlet 3.0 container (it works on Servlet 2.5 containers in a blocking mode as well).
In this article, I will present a use case that explains how to use the Continuations API. The use case is the implementation of an alert and job-scheduling module in a Web server that uses Continuations to scale. Because the demo system does not use Ajax Push, it can be extended to scale to hundreds of requests on the server. This makes the alert/job-scheduling system lightweight in terms of latency and scalability.
Jetty Continuations Use Case: Stock Trading
I have chosen a hypothetical case of the trading domain. Suppose we are building a Web application for stock trading that allows the user to create his or her own portfolio, access a stock ticker of live feeds, and set up alerts, stock updates, trade summary reports, and so on.
For alert/job-scheduling use cases, suppose we need to implement the following features:
- Stock alert: Assume this application has a way to set a threshold value for each stock. When the stock price surpasses or falls below that threshold value (depending on whether it is a positive or negative threshold), an alert needs to be sent as an SMS or email. This feature will have more data to crunch because it will need to check each user's stocks against his/her alert setup.
- Trade Summary: The application needs to send a trade summary to each user at the user-configured frequency (e.g., every day or every hour).
- Stock Updates Alert: The user can select his/her favorite stocks and get alerts when substantial changes in stock price occur. The application can be even smarter by reading news feeds from other Web-based financial news providers and alerting the user when any news arrives for his/her selected stocks.
- Data Archive: The application will archive older records from an RDBMS/repository to make the database more efficient and manage disk space.
We need to implement these four tasks in a Web container. If we deploy these jobs on a dedicated Web container that has only this deployment, and if these jobs run only, say, once a day, then scaling is not a great challenge (assuming the data is not huge and there will not be more frequent jobs). But let's use only one container for the whole application, which includes:
- All the UI-based use cases server code (Servlet/JSP, etc.)
- All the scheduling tasks code and server code (Servlet) that implements the alerts and jobs (stock alert, update, etc.)
Having only one Web server will be easier to maintain, but scalability will be an issue, as the UI use cases will keep hundreds of simultaneous connections open on the server. So using Jetty Continuations is wise even for the alerts/jobs implementation, and for UI-related use cases such as the stock ticker, Ajax Push (CometD) plus Jetty Continuations is the best solution for scalability. In this article, however, we will see that even alerts and scheduled jobs can leverage Continuations to put less load on the container.
Figure 1 shows a block diagram of the application inside a Jetty container.
Three main blocks in Figure 1 deserve some further explanation.
- Task Scheduler: The task scheduler should schedule jobs as soon as the server starts. For this, I used the ServletContextListener interface, which has been defined in the Servlet API since version 2.3. An implementation of this interface needs to be configured with the application. The container notifies this implementation about changes to the Servlet context of the Web application through two methods:
void contextDestroyed(ServletContextEvent sce)
void contextInitialized(ServletContextEvent sce)
The contextInitialized method will be called by the container when it starts this Web application, so we should schedule all the tasks in this method. The task scheduler will use java.util.Timer to schedule the tasks.
For simplicity, the task-scheduling module is part of the Web app itself. Otherwise, it could be a Windows scheduled task or a UNIX cron job.
- Tasks: Task classes extend java.util.TimerTask. These tasks call an HTTP URL to execute the task's business logic.
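The scheduling mechanics described above can be sketched as follows. This is a minimal, self-contained sketch: the class name StockAlertTask and the servlet URL in the comment are hypothetical, the HTTP call is stood in for by a Runnable, and a plain main() takes the place of the contextInitialized() method of the ServletContextListener, where this scheduling code would live in the Web app.

```java
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.atomic.AtomicInteger;

public class SchedulerSketch {

    // Stand-in for a task class; the real task would open an
    // HttpURLConnection to the alert servlet's URL (hypothetical),
    // e.g. http://localhost:8080/trading/stockAlert, and read the response.
    static class StockAlertTask extends TimerTask {
        private final Runnable action; // stands in for the HTTP call

        StockAlertTask(Runnable action) {
            this.action = action;
        }

        @Override
        public void run() {
            action.run();
        }
    }

    // Schedule the task every 100 ms and report how often it ran.
    // In the Web app, the scheduling below would happen once, inside
    // contextInitialized() of the ServletContextListener, with a
    // realistic period (hourly, daily, or user-configured).
    static int runFor(long millis) {
        Timer timer = new Timer("task-scheduler", true); // daemon thread
        AtomicInteger runs = new AtomicInteger();
        timer.scheduleAtFixedRate(new StockAlertTask(runs::incrementAndGet), 0, 100);
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        timer.cancel();
        return runs.get();
    }

    public static void main(String[] args) {
        System.out.println("task ran " + runFor(350) + " times");
    }
}
```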
- Tasks/Alert HTTP Services: The actual alerts and jobs are published as Servlets in the container, so that they can be accessed from any external scheduling system as well. These Servlets contain the Continuations-related code to release threads and resume after timeouts. This will be explained in the next sections.
The Jetty Continuation API in Brief
The ContinuationSupport factory class is used to create a Continuation that is identified by the HttpServletRequest. This enables the container to resume the old request.
Continuation continuation = ContinuationSupport.getContinuation(request);
The Continuation class can be used to suspend a request:

continuation.suspend();

Continuation.setTimeout(long) can be used, as in this demo application, to set the timeout explicitly, allowing the request to complete automatically after the timeout (in milliseconds) expires.
The Continuation class can also be used to resume a suspended request:

continuation.resume();

Calling resume() redispatches the request to the container so that normal processing can finish.
The timeout should be set before suspending the request. If the Continuation times out and the Continuation listeners do not resume or complete it, the request will be resumed as an expired request. That can be checked using the isExpired() method.
The other option to complete the request is Continuation.complete():

continuation.complete();

The suspend(response) variation can be used if you need to use the response when the request resumes or completes; otherwise, the same response object would not be available when the request is resumed:

continuation.suspend(response);

The getServletResponse() method returns the Servlet response if it was set using suspend(response); if it was not set, it returns null.
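Putting these calls together, a suspending alert servlet might look like the following sketch. It assumes the Jetty 7 continuation library (org.eclipse.jetty.continuation) is on the classpath; the servlet name StockAlertServlet and the AlertProcessor worker hand-off are hypothetical, and the business logic is elided.

```java
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.eclipse.jetty.continuation.Continuation;
import org.eclipse.jetty.continuation.ContinuationSupport;

// Hypothetical alert servlet that releases its thread while the
// alert processing runs elsewhere.
public class StockAlertServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        Continuation continuation = ContinuationSupport.getContinuation(request);

        if (continuation.isExpired()) {
            // Redispatched after the timeout without resume()/complete().
            response.getWriter().println("alert processing timed out");
            return;
        }

        if (continuation.isInitial()) {
            continuation.setTimeout(10000);  // must be set before suspend()
            continuation.suspend(response);  // retain the response for complete()
            // Hand the continuation to a worker; this thread returns to the pool.
            AlertProcessor.submit(continuation);
        }
    }
}

// Hypothetical worker: when the alert result is ready, it writes to the
// retained response and completes the request.
class AlertProcessor {
    static void submit(final Continuation continuation) {
        new Thread(() -> {
            try {
                // ... run the alert business logic here ...
                continuation.getServletResponse().getWriter().println("alert sent");
                continuation.complete();
            } catch (IOException ignored) {
            }
        }).start();
    }
}
```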
The ContinuationListener is an event-based listener that you implement if you want to take some action on these events.
public interface ContinuationListener
{
    public void onTimeout(Continuation continuation);
    public void onComplete(Continuation continuation);
}
The implementation in this article uses this listener, as it's not a UI-based application and therefore does not use push server handlers. In the demo application, Servlets are called synchronously from TimerTask and have no way of handling an asynchronous response.
Calling Continuation.addContinuationListener(listener) registers a listener with the Continuation, which allows the Continuation API to call the event methods at the right time. The listener should be registered before calling suspend().
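Registration then looks like the following sketch (the comments describe actions an alert servlet might plausibly take; they are placeholders, not part of the article's implementation):

```java
Continuation continuation = ContinuationSupport.getContinuation(request);
continuation.addContinuationListener(new ContinuationListener() {
    public void onTimeout(Continuation c) {
        // e.g. mark the pending alert as expired so it can be retried
    }
    public void onComplete(Continuation c) {
        // e.g. record that the alert/job request finished
    }
});
continuation.setTimeout(10000);
continuation.suspend(); // the listener must already be registered here
```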
This ends the explanation of the Continuation API. The next section covers Continuation lifecycle and business logic.