
Implementing Highly Available and Scalable Solutions Using the WebLogic Cluster

  • October 3, 2003
  • By Prem, Ciconte, Devgan, Dunbar, & Go

This material is from the book BEA WebLogic Platform 7, written by Jatinder Prem, Bernard Ciconte, Manish Devgan, Scott Dunbar, and Peter Go, published by Sams Publishing.

© Copyright Sams Publishing. All rights reserved.


In This Article

  • The Motivations for Clustered Infrastructure Solutions: Scalability and High Availability
  • Understanding WebLogic Clusters
  • Understanding Which Objects Can Be Clustered

The Motivations for Clustered Infrastructure Solutions: Scalability and High Availability

Scalability and high availability (fault resilience) are two key infrastructure adaptability requirements that organizations must reflect in the architectural (system) design of their mission-critical e-business solutions. As illustrated in Figure 1, during the client/server era, scalability and high-availability solutions were implemented primarily in the Database or Server tiers.


Figure 1 Scalability and high availability within a client/server architecture.

To implement an agile and robust J2EE e-business solution, scalability and high availability solutions for the Database tier still remain applicable as they did for the client/server system, but now they address the Enterprise Information System (EIS) tier. However, as illustrated in Figure 2, scalability and high availability must now also be addressed at the distributed middle tiers of the J2EE Application Programming Model—the Presentation (Web servers) and Application (Application servers) tiers—which brings a whole new dimension of challenges. These challenges are as follows:

  • Any potential points of failure must be masked from system users through effective Web and J2EE server failover mechanisms, thus eradicating or minimizing an application's downtime.

  • Performance should not be compromised for scalability through the dynamic introduction of additional online Web and J2EE servers and hardware.

  • Scalability and high-availability solutions should not require complex development or management effort to realize.

  • The hardware and operating system portability of J2EE solutions should not be constrained through the mechanics of introducing scalability or high availability.


Figure 2 Scalability and high-availability requirements within the J2EE Application Programming Model.

Scalability and high availability within a J2EE architecture are achieved through the implementation of client-request load-balancing techniques in combination with the clustering capabilities of the J2EE application server that constitutes the middle tier, such as the BEA WebLogic Server cluster. A cluster cannot possess scalability or high availability without the support of an intelligent and robust load-balancing service.

A cluster in a J2EE architecture is generally defined as a group of two or more J2EE-compliant Web or application servers that closely cooperate with each other through transparent object replication mechanisms to ensure each server in the group presents the same content. Each server (node) in the cluster is identical in configuration and networked to act as a single virtual server. Client requests directed to this virtual server can be handled independently by any J2EE server in the cluster, which gives the impression of single entity representation of the hosted J2EE application in the cluster.
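To illustrate how a cluster can present itself to clients as a single virtual server, the following sketch shows a client obtaining a JNDI InitialContext against a WebLogic cluster by listing several node addresses in the t3 provider URL. This is only a minimal sketch: it assumes the WebLogic client library (weblogic.jar) is on the classpath, and the host names, port, and JNDI name are hypothetical.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class ClusterLookupExample {
    public static void main(String[] args) throws NamingException {
        Hashtable env = new Hashtable();
        // WebLogic's JNDI context factory (requires weblogic.jar on the client classpath).
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        // Listing several cluster nodes in the provider URL lets the initial connection
        // be made to whichever node is available (host names are hypothetical).
        env.put(Context.PROVIDER_URL, "t3://node1:7001,node2:7001,node3:7001");

        Context ctx = new InitialContext(env);
        try {
            // Look up a clustered object by its JNDI name (name is hypothetical);
            // the client does not need to know which node actually services the call.
            Object home = ctx.lookup("ejb/AccountHome");
            System.out.println("Looked up: " + home);
        } finally {
            ctx.close();
        }
    }
}

Because the lookup is addressed to the cluster rather than to a specific machine, the client code stays the same whether the cluster contains two nodes or ten.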

The following sections introduce the three highly interrelated core services—scalability, high availability, and load balancing—that any J2EE server clustering solution must provide.

How these services are implemented within WebLogic Server will be discussed later in this chapter.

Scalability

Scalability refers to the capability to expand the capacity of an application hosted on the middle tier without interruption or degradation of the Quality of Service (QoS) to an increasing number of users. As a rule, an application server must always be available to service requests from a client.

As you may have discovered through experience, however, if a single server becomes oversubscribed, a connecting client can experience a Denial of Service (DoS) or performance degradation. This can happen because the computer's network interface limits the amount of data the server can transmit regardless of how much throughput the processor could otherwise deliver, or because the J2EE server is too busy servicing existing requests.

As client requests continue to increase, the J2EE server environment must be scaled accordingly. There are two approaches to scaling:

  • Forklift method—This method involves replacing the old server computer with a new, more robust and powerful server to host the J2EE server. The problem with this approach is that it is a short-term fix. As traffic continues to increase, the new computer will likely become obsolete, like the server it replaced.

  • Clusters—Clustering the J2EE servers makes it easy to dynamically increase the capacity of the cluster by just adding another node and updating the configuration of the load balancer to use the additional resource. Load balancers use a variety of algorithms to detect server request flows and monitor server loads to distribute server requests optimally across the cluster's nodes. Conversely, you can just as easily remove a node to scale down or replace a node during normal maintenance or upgrading.

By applying conventional wisdom, the most logical method for achieving scalability is through the implementation of a clustering solution.
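To make the load-balancing idea concrete, the following is a minimal, generic round-robin dispatcher sketch in Java. It is not WebLogic's implementation; the node addresses are hypothetical, and the ability to add or remove nodes at runtime is a simplifying stand-in for how a load balancer's configuration is updated as a cluster scales up or down.

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.atomic.AtomicInteger;

// A minimal round-robin load balancer: requests are spread evenly across the
// nodes currently registered, and nodes can be added or removed at runtime,
// mirroring how a cluster scales up or down without interrupting service.
public class RoundRobinBalancer {
    private final List<String> nodes = new CopyOnWriteArrayList<String>();
    private final AtomicInteger next = new AtomicInteger(0);

    public void addNode(String address)    { nodes.add(address); }
    public void removeNode(String address) { nodes.remove(address); }

    // Pick the next node in circular order; fails if no nodes are registered.
    public String chooseNode() {
        Object[] snapshot = nodes.toArray();
        if (snapshot.length == 0) {
            throw new IllegalStateException("No nodes available");
        }
        int index = Math.floorMod(next.getAndIncrement(), snapshot.length);
        return (String) snapshot[index];
    }

    public static void main(String[] args) {
        RoundRobinBalancer balancer = new RoundRobinBalancer();
        balancer.addNode("node1:7001");   // hypothetical node addresses
        balancer.addNode("node2:7001");
        for (int i = 0; i < 4; i++) {
            System.out.println("Request " + i + " -> " + balancer.chooseNode());
        }
        // Scaling up: a newly added node immediately becomes a dispatch target.
        balancer.addNode("node3:7001");
        System.out.println("Request 4 -> " + balancer.chooseNode());
    }
}

Production load balancers typically combine such a dispatch algorithm with health checks and load monitoring, as described above, so that requests are routed only to responsive, lightly loaded nodes.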




