
Infrastructure Virtualization: Extending SOA Throughout The Enterprise

  • October 10, 2006
  • By Matt Haynos

Scale-out and distributed computing continue to gain prominence across the IT infrastructure landscape. Architectural approaches built on this kind of foundation offer inherent advantages, including potential price-performance benefits and greater flexibility. However, management, availability and throughput concerns can quickly outweigh these benefits. As a result, a balanced approach across infrastructure resources is often required. For most organizations, a mixture of both scale-up and scale-out computing resources is the preferred architectural approach, and the mix is often dictated by the types of workloads that need to be supported.

To address this challenge, organizations are turning to software that operates equally well across distributed (scale out) and centralized (scale up) resources, and offers a consistent application experience and approach across a number of heterogeneous resource types.

For example, J2EE standards have provided the framework for an application infrastructure that simplifies application development so that it is possible to deploy applications that take advantage of system services without binding the applications directly to the underlying service implementations.
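The decoupling described above can be sketched in miniature. This is not a real J2EE API; it is an illustrative toy registry (all names invented) showing the core idea: the application codes against a logical service name and an interface, so the underlying implementation can be swapped without touching application code.

```java
import java.util.HashMap;
import java.util.Map;

public class ServiceRegistry {
    // The application depends only on this interface, never on a concrete class.
    public interface MessageService {
        String send(String msg);
    }

    private static final Map<String, MessageService> services = new HashMap<>();

    // The container (or here, test code) binds an implementation to a logical name.
    public static void bind(String name, MessageService impl) {
        services.put(name, impl);
    }

    // Application code resolves the service by name at run time.
    public static MessageService lookup(String name) {
        return services.get(name);
    }

    public static void main(String[] args) {
        // In a real container the binding is configured, not coded by the app.
        bind("services/messaging", msg -> "queued:" + msg);
        MessageService svc = lookup("services/messaging");
        System.out.println(svc.send("order-42")); // app never names the impl class
    }
}
```

In an actual J2EE container, the lookup would go through a naming service such as JNDI rather than a static map, but the contract seen by the application is the same.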

Now, application and infrastructure logic can be kept separate, enabling application developers to focus on building applications rather than building infrastructure code. Through new, open, standards-based technologies, companies can build the infrastructure for deploying applications while also gaining advanced capabilities such as resource virtualization, high-performance computing, and intelligent, policy-based workload management. These combined capabilities deliver the following benefits:

  • Increased throughput, reduced response times and linear scaling for existing applications
  • Improved responsiveness in a service-oriented architecture (SOA) environment
  • The ability to support new, innovative applications and workload types

Complementing these capabilities is an emerging trend known as virtualization, which offers several unique benefits. Virtualization is the creation of a virtual (rather than actual) version of a resource, such as an operating system, a server, a storage device or a network resource.

Virtualization software is being adopted at a rapid pace and can be viewed as part of an overall trend in enterprise IT that includes autonomic computing and utility computing. The goal of virtualization is essentially to centralize administrative tasks while improving scalability and workload management.

Virtualization is becoming a critical part of an SOA strategy for several reasons. First, by separating the underlying infrastructure from the applications that run on it, workloads can be dynamically placed and migrated across a pool of resources. There's no longer a tight binding or one-to-one relationship between an individual machine (or set of machines) and an application. This loose coupling enables open, standards-based software to intelligently manage and shift workloads according to business policy. High-priority applications can be allocated the majority of resources; lower-priority applications are either deferred to run later or moved to less capable resources. These operations are all seamless to the user, but they require sophisticated job scheduling and workload management capabilities.
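One very simplified way to picture policy-based placement is to match the highest-priority workloads to the most capable resources first. The sketch below is a hypothetical illustration, not any product's scheduler; workload priorities, node capacities and all names are invented.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class PolicyPlacer {
    // Maps each workload (name -> priority) to a node (name -> capacity):
    // the highest-priority workload gets the highest-capacity node, and so on.
    public static Map<String, String> place(Map<String, Integer> workloads,
                                            Map<String, Integer> nodes) {
        List<String> byPriority = new ArrayList<>(workloads.keySet());
        byPriority.sort((a, b) -> workloads.get(b) - workloads.get(a));

        List<String> byCapacity = new ArrayList<>(nodes.keySet());
        byCapacity.sort((a, b) -> nodes.get(b) - nodes.get(a));

        Map<String, String> placement = new LinkedHashMap<>();
        for (int i = 0; i < byPriority.size() && i < byCapacity.size(); i++) {
            placement.put(byPriority.get(i), byCapacity.get(i));
        }
        return placement;
    }

    public static void main(String[] args) {
        Map<String, Integer> work = Map.of("billing", 9, "reporting", 2);
        Map<String, Integer> pool = Map.of("mainframe", 100, "blade", 10);
        // billing (priority 9) lands on the mainframe; reporting on the blade
        System.out.println(place(work, pool));
    }
}
```

A production workload manager would also weigh policies, service-level goals and migration costs, but the principle of ranking work against resources is the same.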

Further, this notion of infrastructure virtualization enables the inclusion of a wide range of platforms, including larger machines such as mainframes, symmetric multiprocessor systems, and distributed resources within the resource pool. This virtualization is commonly referred to as a grid. Indeed, the underlying principle of grid computing is the virtualization of both workload and information resources. These resources are typically homogeneous (blade servers or clusters), but they can be and often are heterogeneous.

Support for heterogeneous resources might not seem like an important point, but as virtualized infrastructures continue to gain traction, the ability to include a wide range of resource types within a grid (or pool) of resources, and to have your infrastructure software use and manage them in a seamless fashion, provides flexibility and choice. The new raft of software dedicated to supporting virtualization empowers application developers and system administrators to view the infrastructure resources as a single, consistent entity. This infrastructure virtualization also provides a foundation for your organization to help increase infrastructure value.

Virtualization and SOA

As you know, SOA is an approach to information technology in which a company's existing computing systems become more responsive and more closely aligned to business goals. An SOA lets you build, deploy and integrate services for IT resources, applications and business process flows. It facilitates integration, enables you to modularize applications and provides a coherent view of a business process as a set of coordinated services. You can reuse these services to more rapidly build new applications and business processes, and to work more seamlessly across your enterprise and with partners, suppliers and customers.

The momentum behind SOA has been significant across the industry, but to use this approach effectively, you need to have the corresponding flexibility in your underlying infrastructure. Using software that spans both transactional and long-running workloads can create an integrated environment that dynamically determines how to optimally allocate application infrastructure resources based on customer-defined business goals. This provides the necessary flexibility and helps companies realize the full benefits of SOA.

Virtualization Extends the SOA Value Proposition

Although the core virtualization foundation enables sophisticated workload and autonomic management, it also provides a platform for infrastructure optimization. The central concept here is one of helping to drive up infrastructure value and lower total cost of ownership by increasing resource usage across a virtualized pool of resources. Simply put, to drive up infrastructure usage, you either consolidate workload on fewer resources or you increase the amount of work your existing infrastructure is doing.
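The consolidation arithmetic behind that claim is simple enough to sketch. The figures below are invented for illustration: the same total demand spread over fewer machines yields a higher average utilization.

```java
public class Consolidation {
    // Average pool utilization: total demand / (machines * capacity per machine).
    public static double utilization(double totalDemand, int machines, double capacityEach) {
        return totalDemand / (machines * capacityEach);
    }

    public static void main(String[] args) {
        double demand = 200.0; // arbitrary work units, assumed constant
        // 10 machines of capacity 100 each: 20% average utilization
        System.out.println(utilization(demand, 10, 100.0));
        // Consolidated onto 4 machines: 50% average utilization
        System.out.println(utilization(demand, 4, 100.0));
    }
}
```

The other lever the article names, increasing the work done, raises the numerator instead of shrinking the denominator; either way utilization, and with it infrastructure value per dollar, goes up.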

However, to realize the full business benefits of virtualization software, it's important to identify application candidates across organizations within a company. Infrastructure virtualization and horizontal integration require a common infrastructure that can host multiple lines of business (LOB) applications.

Businesses and departments have to share infrastructure resources, which is the whole point of infrastructure virtualization and a prime motivation behind SOA. Although this might seem like a challenging proposition, the benefits can be significant.

Companies that have deployed cross-company virtualized infrastructures have realized benefits in economies of scale and overall lower operational costs. Additionally, it's easier to set priorities across multiple applications so that each business unit realizes its fair share of infrastructure resources. This capability helps mitigate reluctance from LOB organizations to share and use a common infrastructure.
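One common way to give each business unit its fair share, as described above, is weighted proportional allocation. The sketch below is a hypothetical illustration with invented unit names and weights, not a description of any particular product's policy engine.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FairShare {
    // Splits totalCapacity across business units in proportion to agreed weights.
    public static Map<String, Double> allocate(Map<String, Integer> weights,
                                               double totalCapacity) {
        int sum = weights.values().stream().mapToInt(Integer::intValue).sum();
        Map<String, Double> shares = new LinkedHashMap<>();
        for (Map.Entry<String, Integer> e : weights.entrySet()) {
            shares.put(e.getKey(), totalCapacity * e.getValue() / sum);
        }
        return shares;
    }

    public static void main(String[] args) {
        Map<String, Integer> weights = new LinkedHashMap<>();
        weights.put("finance", 3);   // invented weights agreed between LOBs
        weights.put("marketing", 1);
        // With 100 capacity units: finance gets 75.0, marketing gets 25.0
        System.out.println(allocate(weights, 100.0));
    }
}
```

Because the split is explicit and auditable, each LOB can verify it receives its agreed share, which is exactly what eases the reluctance to pool resources.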

Summary

With companies looking to unlock infrastructure value to realize new levels of business success, virtualization software that supports an SOA strategy delivers tremendous IT and bottom-line benefits.

As companies continue to implement SOA throughout the enterprise, many services (some fine-grained, some larger-grained, some mobile) are going to require a responsive and scalable IT infrastructure. To this end, virtualization software provides a resilient, scalable, highly available and fault-tolerant infrastructure, using autonomic capabilities to meet service-level agreements and business performance needs. With these capabilities, you can optimize the resource utilization and management of your IT infrastructure while enhancing the quality of service for your critical applications.

About the Author

Matt Haynos is a program director on the IBM Grid Technology and Strategy team, based in Somers, N.Y. He has various responsibilities on the team covering a broad range of initiatives related to building the IBM grid computing business. He has held a variety of technical and managerial positions within IBM in the application development, program direction, and business development areas. He holds a bachelor's degree in computer science/applied mathematics and cognitive science from the University of Rochester, and a master's degree in computer science from the University of Vermont.



