
Implementing Highly Available and Scalable Solutions Using the WebLogic Cluster

  • October 3, 2003
  • By Prem, Ciconte, Devgan, Dunbar, & Go

High Availability

High availability refers to the capability to ensure that applications hosted in the middle tier remain consistently accessible and operational for their clients.
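To put "consistently accessible" on a concrete scale, availability is commonly quantified as a percentage of uptime, and each additional "nine" reduces the permissible annual downtime tenfold. The following back-of-the-envelope sketch (added here for illustration; it is not part of the cluster discussion) converts an availability target into the downtime budget it allows per year:

    /** Converts an availability target into the annual downtime it permits. */
    public class DowntimeBudget {
        public static void main(String[] args) {
            double hoursPerYear = 365 * 24;              // 8,760 hours
            double[] targets = {0.99, 0.999, 0.9999};    // two, three, and four nines
            for (double availability : targets) {
                double downtime = (1 - availability) * hoursPerYear;
                System.out.printf("%.2f%% availability allows %.2f hours of downtime/year%n",
                        availability * 100, downtime);
            }
        }
    }

Running this shows that 99% availability still permits 87.6 hours of downtime per year, whereas 99.99% permits less than an hour.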

High availability is achieved through the redundancy of multiple Web and application servers within the cluster and is implemented by the cluster's "failover" mechanisms. If an application component (an object) fails while processing its task, the cluster's failover mechanism reroutes the task and any supporting information to a copy of the object on another server, which continues the task. To enable failover:

  • The same application components must be deployed to each server instance in the cluster.

  • The failover mechanism must be aware of the location and availability of the objects that comprise an application in a cluster.

  • The failover mechanism must be aware of the progress of all tasks, so that the copy of a failed object can resume a task at the point where processing stopped, without duplicating persistent data.

In the event of a failure of one of the J2EE servers in a cluster, the load-balancing service, in conjunction with the failover mechanism, should seamlessly reroute requests to the remaining servers, preventing any interruption to the middle-tier service.
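The sketch below illustrates the client-facing half of this idea: a replica-aware stub that retries an idempotent task against the next server in the cluster when one instance fails. All of the types here (AppServer, Task, Result, FailoverStub) are hypothetical stand-ins, not WebLogic APIs:

    import java.util.List;

    // Hypothetical types for illustration only; not part of the WebLogic API.
    interface Task { }
    interface Result { }
    interface AppServer {
        Result execute(Task task) throws Exception;
    }

    /** Sketch of a replica-aware stub that fails over to the next server in the cluster. */
    public class FailoverStub {
        private final List<AppServer> replicas;  // same components deployed on every instance

        public FailoverStub(List<AppServer> replicas) {
            this.replicas = replicas;
        }

        /** Reroutes an idempotent task to another replica if the current server fails. */
        public Result invoke(Task task) {
            Exception lastFailure = null;
            for (AppServer server : replicas) {
                try {
                    return server.execute(task);  // first healthy replica completes the task
                } catch (Exception e) {
                    lastFailure = e;              // instance failed or unreachable; try the next
                }
            }
            throw new IllegalStateException("All replicas in the cluster failed", lastFailure);
        }
    }

A production failover mechanism, per the second and third requirements above, would also consult the cluster's knowledge of object locations and task progress, so that a partially completed task is resumed at the point it stopped rather than blindly restarted.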

Additional Factors Affecting High Availability

In addition to application server clustering, which provides high availability in the middle tier of an application architecture, organizations must accept that people, processes, and the technology infrastructure are all interdependent facets of any high-availability solution. People and process issues comprise at least 80% of the solution, with the technology infrastructure assuming the remainder.

From a people and process perspective, the objective is to balance the potential business cost of incurring system unavailability with the cost of insuring against planned and unplanned system downtime. Planned downtime encompasses activities in which an administrator is aware beforehand that a resource will be unavailable and plans accordingly—for example, performing backup operations, making configuration changes, adding processing capacity, distributing software, and managing version control. Unplanned downtime, also known as outages or failures, includes a multitude of "What happens if" scenarios, such as

  • What happens if a disk drive or CPU fails?

  • What happens if power is lost to one or more servers by someone tripping over the power cord?

  • What happens if there is a network failure?

  • What happens if the key system administrator finds a better job?

In practice, organizations should first focus on developing mature planned-downtime procedures before turning to unplanned downtime. This priority is supported by extensive studies conducted by research firms, which concluded that 70–90% of downtime may be directly associated with planned downtime activities. In reality, however, organizations tend to devote more time and effort to preventing unplanned downtime.

From a technology infrastructure perspective, for a system to be truly highly available, redundancy must exist throughout the system. For example, a system must have the following:

  • Redundant and failover-capable firewalls

  • Redundant gateway routers

  • Redundant SAN switching infrastructure

  • Redundant and failover-capable load balancers/dispatchers

  • Redundant Enterprise Information System (EIS) layer, for example, content management systems, relational databases, and search engine systems

As stated earlier, the extent of redundancy should be directly related to the business cost of system unavailability versus the realized cost of insuring against system downtime.

Load Balancing

For a server cluster to achieve its high-availability, high-scalability, and high-performance potential, load balancing is required. Load balancing refers to the capability to optimally partition inbound client processing requests across all the J2EE servers that constitute a cluster, based on factors such as capacity, availability, response time, current load, historical performance, and administrative weights (priorities) assigned to the clustered servers.

A load balancer, which can be software or hardware based, sits between the Internet and the physical server cluster, acting as a virtual server for the cluster. As each client request arrives, the load balancer makes a near-instantaneous, intelligent decision about which J2EE server is best able to satisfy it. Software-based load balancers come in the form of computers, routers, or switches with integrated load-balancing software or load-balancing capabilities. Hardware load balancers are dedicated pieces of equipment that provide advanced load-balancing features along with additional reliability features such as automatic failover to a redundant unit.
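As a concrete illustration of the weight-based partitioning described above, the following sketch selects a server with probability proportional to its administrative weight. It is deliberately minimal and the names are hypothetical; a real load balancer would also factor in response time, current load, and health checks:

    import java.util.List;
    import java.util.Random;

    /** Sketch of weighted server selection; real balancers also track load and health. */
    public class WeightedBalancer {
        /** A clustered server and the administrative weight (priority) assigned to it. */
        public record Server(String host, int weight) { }

        private final List<Server> servers;
        private final Random random = new Random();

        public WeightedBalancer(List<Server> servers) {
            this.servers = servers;
        }

        /** Picks a server with probability proportional to its weight. */
        public Server choose() {
            int total = servers.stream().mapToInt(Server::weight).sum();
            int point = random.nextInt(total);    // uniform point in [0, total)
            for (Server s : servers) {
                point -= s.weight();
                if (point < 0) return s;          // the point fell in this server's slice
            }
            throw new IllegalStateException("unreachable when all weights are positive");
        }

        public static void main(String[] args) {
            WeightedBalancer lb = new WeightedBalancer(List.of(
                    new Server("app1", 3),        // receives about half the requests
                    new Server("app2", 2),
                    new Server("app3", 1)));
            System.out.println(lb.choose().host());
        }
    }

Weighted random selection is only one policy; round-robin, least-connections, and response-time-based algorithms make the same choose() decision with different inputs.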




