Mandar Chitnis spoke with Rob High, Distinguished Engineer and Chief Architect of the
WebSphere Application Server product family at IBM.
Rob has worked in object-oriented programming for the past eleven years, and in
distributed computing systems for about fifteen years. Prior to his current role,
he was responsible for the security and system management architecture for the early WebSphere
releases, was the technical lead for the object services development in
Component Broker, and led the object services and object management
framework development for the SOMobjects server.
Q: IBM and Microsoft are closely collaborating on Web services. To what extent is this useful for developers using the WebSphere platform on the Windows operating system?
A: First, the collaboration between IBM and Microsoft to define Web services specifications will help programmers on any platform benefit from IBM’s experience over the past three to four decades. We know what it takes to make these systems work and to be useful for mission-critical business computing. Furthermore, IBM has experience working with open-systems industry standards and will help ensure the specifications for Web services are submitted for governance by open standards organizations.
Since Web services are a critical element in the future of distributed computing, both companies recognize the importance of interoperation. Microsoft has been working with IBM to ensure that our respective platforms – WebSphere and .NET – interoperate. We have contributed significantly to the Web Services Interoperability Organization (WS-I.org) to ensure a practical and consistent interpretation of the Web services specifications and appropriate test vehicles for assuring interoperation of Web services. This, in particular, ensures that .NET developers will be able to leverage their business services hosted by WebSphere and vice versa – whether those services are also hosted on the Windows platform or on any other production platform such as AIX, Solaris, HP-UX, iSeries, zSeries, or Linux.
Q: Are we going to see interoperability between WebSphere (J2EE) and .NET? Will developers be able to leverage .NET (CLR) capabilities in applications deployed on WebSphere?
A: The primary mechanism for interoperation with the .NET platform will be through Web services. Other approaches are possible, but they would result in a tightly coupled interdependence between technologies. Web services are designed specifically to enable loosely coupled interworking between application components – so applications and underlying technology platforms can continue to innovate and evolve without interrupting the relationship between the parts hosted on different platforms.
Q: IBM does a lot of business with customers who run both WebSphere and Windows. How much additional flexibility does WebSphere offer systems tied to Windows?
A: Quite a bit. WebSphere removes a customer’s dependence on a single vendor, and it enables their Windows solution to communicate with their legacy applications in a fairly straightforward manner. It broadens a company’s choices in terms of business partner selection and applications. It provides deeper integration to drive better results, such as enabling an on demand infrastructure. It’s the most complete offering in the industry. There are enterprise platforms to consider other than .NET – even for use with Windows.
Q: What support does WebSphere provide for messaging and guaranteed delivery on the Windows operating system?
A: WebSphere supports the Java Message Service (JMS) specification as required by J2EE. The WebSphere Application Server implementation employs an embedded version of the WebSphere MQ product – delivering all the same reliable and guaranteed message delivery support that is expected of the MQ platform itself.
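As a rough sketch of what sending a guaranteed message looks like from the application’s point of view – the JNDI names here are hypothetical and would be defined by the administrator when the messaging resources are configured – the code is standard JMS:

    import javax.jms.*;
    import javax.naming.InitialContext;

    public class OrderNotifier {
        public void sendOrder(String orderXml) throws Exception {
            InitialContext ctx = new InitialContext();

            // Hypothetical JNDI names; the administrator binds the embedded
            // JMS provider's resources under whatever names the installation uses.
            QueueConnectionFactory factory =
                (QueueConnectionFactory) ctx.lookup("jms/OrderConnectionFactory");
            Queue queue = (Queue) ctx.lookup("jms/OrderQueue");

            QueueConnection connection = factory.createQueueConnection();
            try {
                QueueSession session =
                    connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                QueueSender sender = session.createSender(queue);

                // PERSISTENT delivery asks the provider (the embedded MQ engine)
                // to harden the message so it survives a provider restart.
                sender.setDeliveryMode(DeliveryMode.PERSISTENT);
                sender.send(session.createTextMessage(orderXml));
            } finally {
                connection.close();
            }
        }
    }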
IBM believes the value of service-oriented architecture is in defining the basic principles of loosely coupled component design, and in expressing those component definitions through a language- and technology-neutral abstract representation of the component in WSDL. We exploit the inherent flexibility of WSDL to enable communication with Web services through a variety of different underlying protocols and message encodings, each representing a different set of quality-of-service trade-offs.
For example, the SOAP over HTTP binding normally associated with Web services ensures interoperation with Web services deployed on other vendors’ platforms. However, on WebSphere, the same Web service can be deployed to also, or alternatively, support SOAP over JMS (leveraging the reliable messaging infrastructure of MQ), RMI/IIOP, or even a local Java object call. The client- and server-side components are deployed in the same manner – the differences in protocol are subsumed and encapsulated under a common proxy interface based on the JAX-RPC standard. In this way, the application is programmed in one fashion and the underlying binding is selected by the administrator at deployment time based on the QoS requirements of the installation.
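From the client side, that proxy interface looks roughly like the sketch below. The service, port, and WSDL names are hypothetical; the point is that the invocation code stays the same whichever binding the administrator selects at deployment time.

    import java.net.URL;
    import javax.xml.namespace.QName;
    import javax.xml.rpc.Service;
    import javax.xml.rpc.ServiceFactory;

    public class QuoteClient {
        public static void main(String[] args) throws Exception {
            // Hypothetical WSDL location, service name, and port name.
            URL wsdl = new URL("http://example.com/quote/StockQuote.wsdl");
            QName serviceName = new QName("http://example.com/quote", "StockQuoteService");
            QName portName = new QName("http://example.com/quote", "StockQuotePort");

            // JAX-RPC resolves the binding (SOAP/HTTP, SOAP/JMS, RMI/IIOP, ...)
            // from the deployment information; this calling code does not change.
            ServiceFactory factory = ServiceFactory.newInstance();
            Service service = factory.createService(wsdl, serviceName);
            StockQuote quote = (StockQuote) service.getPort(portName, StockQuote.class);

            System.out.println("IBM: " + quote.getPrice("IBM"));
        }
    }

    // Hypothetical service endpoint interface, as would be generated from the WSDL.
    interface StockQuote extends java.rmi.Remote {
        double getPrice(String symbol) throws java.rmi.RemoteException;
    }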
Q: Incorporating security is a critical area in applications. What kind of security models/frameworks does the WebSphere platform support?
A: WebSphere supports a declarative, roles-based permission model based on the standard J2EE specifications for servlets and EJBs – this is backed by the underlying security-manager architecture of the base J2SE platform, the Java Authentication and Authorization Service (JAAS), and the Java Cryptography Extension (JCE). This model is fundamentally important to ensuring the integrity of business applications for two reasons.
First, the declarative approach dissuades application developers from trying to encode security policy into their applications – the declarative model attaches security policy to the application as a combination of deployment descriptors coupled with a pluggable policy administration mechanism. In this way, the security administrators of your enterprise can ensure the right protection policies are being applied at the right time – or change those policies if they perceive a new threat – without having to change the application. Moreover, security protections can be applied to the application even if the programmer forgets to consider them during application development.
Second, since the security model is based on roles and permissions – ‘a user is granted permission to perform a certain role in the context of a given application’ – the administration process is more intuitive and helps ensure security policies are applied more consistently and correctly across all the business functions in all applications to which that role is relevant. This reduces the potential for loopholes that could be exploited by thieves and other rogues.
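To make the contrast concrete, here is a hypothetical servlet written against that declarative model. It contains no authorization logic of its own – the container enforces the role constraint the deployer declared for its URL before the servlet is ever invoked – and at most the application asks who the authenticated caller is:

    import java.io.IOException;
    import java.security.Principal;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical servlet: the deployment descriptor maps its URL to a security
    // constraint granted to a "payroll-admin" role, so unauthorized callers are
    // rejected by the container and never reach doGet().
    public class PayrollServlet extends HttpServlet {
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            // No authorization checks are coded here; the application only reads
            // the authenticated caller, for auditing or personalization.
            Principal caller = request.getUserPrincipal();
            response.getWriter().println("Payroll report requested by " + caller.getName());
        }
    }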
The entire system is layered over different pluggable providers. In particular, WebSphere provides a plug-in that allows Windows Active Directory to be used as the user registry for WebSphere, letting customers define their users once for use by both Windows and WebSphere. However, since the WebSphere security model is independent of Windows, it can provide a consistent approach to securing resources across all of the platforms used in your enterprise, and can be used to avoid some of the pervasive flaws of the Windows platform.
Q: Does the WebSphere platform include any inherent support/features to ensure high availability of applications on Windows similar to what we see in Unix?
A: The same clustering technology is available on all the platforms supported by WebSphere – including Windows. WebSphere clustering provides for vertical and horizontal scaling – allowing you to add more processing capacity as your application workload increases – along with workload balancing, fail-over, and continuous operations. WebSphere clustering can work over a network of mixed platforms. For example, you can configure one application server instance to run on Windows and another to run on AIX in the same cluster.
You can set different weighting values on the different server instances. If, for example, you have more capacity available on your Unix computer, you can give it a higher weighting, and WebSphere will then route more workload to that computer. If a computer fails, all of the workload will be routed to the other computers in the cluster until that computer is restored. When you apply an upgrade to your application, you can ripple the cluster – that is, direct WebSphere to stop routing work to one application server instance at a time while you update the software on that server, while the other servers continue to receive workload. Thus, it appears to the client that your application is up all the time, even as you go through the application upgrade.
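Purely as a conceptual illustration of weighted routing – this is not WebSphere’s actual workload-management implementation – a higher-weighted member is simply chosen proportionally more often, and failed members drop out of the rotation:

    import java.util.HashSet;
    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.Random;
    import java.util.Set;

    // Conceptual sketch only: weighted, availability-aware request routing.
    public class WeightedRouter {
        private final Map<String, Integer> weights = new LinkedHashMap<String, Integer>();
        private final Set<String> down = new HashSet<String>();
        private final Random random = new Random();

        public void addMember(String server, int weight) { weights.put(server, weight); }
        public void markDown(String server) { down.add(server); }
        public void markUp(String server) { down.remove(server); }

        // Pick an available server with probability proportional to its weight.
        public String route() {
            int total = 0;
            for (Map.Entry<String, Integer> e : weights.entrySet())
                if (!down.contains(e.getKey())) total += e.getValue();
            if (total == 0) throw new IllegalStateException("no servers available");

            int pick = random.nextInt(total);
            for (Map.Entry<String, Integer> e : weights.entrySet()) {
                if (down.contains(e.getKey())) continue;
                pick -= e.getValue();
                if (pick < 0) return e.getKey();
            }
            throw new AssertionError("unreachable");
        }
    }

With, say, an AIX member weighted 3 and a Windows member weighted 1, roughly three-quarters of the requests go to the AIX member; mark one member down and all requests flow to the others.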
Q: Could you tell us a bit more about WebSphere Express? How is it different in terms of capabilities as compared to the WebSphere platform enterprise version?
A: WebSphere Express is one of four offerings in the WebSphere Application Server product family. There’s the ‘base application server’ (formerly called Standard Edition), which supports the entire J2EE specification and is fully certified. It enables a single-server configuration by itself. The Network Deployment edition of WebSphere extends the base offering with support for administering multiple application servers in a single administrative cell, including clustering support. WebSphere’s Enterprise edition extends the base application server with advanced programming features for business rules processing, workflow management, parallel processing, distributed internationalization services, and so forth – in many cases delivering today features that we expect will become part of the Java standard in the future.
WebSphere Express is targeted at developers who only need support for Web application components – JSPs and servlets. It uses the same underlying application server technology included on our zOS platform and embedded in the WebSphere Studio Application Developer unit test environment. This ensures a completely consistent set of behaviors across all platforms while offering a range of scalability and quality-of-service options. The applications you write will otherwise remain the same and deliver the same functional behavior regardless of where you choose to deploy them.
Q: Is there any initiative to include workflow capabilities in the WebSphere platform?
A: WebSphere has supported micro- and macro-flows for nearly a year. The engine is based on the Web Services Flow Language (WSFL). IBM and Microsoft combined WSFL and XLANG to form the Business Process Execution Language (BPEL), which we recently donated for standardization through OASIS. BPEL will be supported in a future release of WebSphere Enterprise.
The workflow support in WebSphere allows you to compose business processes by orchestrating the activities of the workflow. These activities are implemented as Web services, and thus the workflow engine can orchestrate activities that are implemented and deployed on different platforms within the network, either locally or across domains. We differentiate between micro- and macro-flows depending on whether the flow state is persistent. Micro-flows execute to completion, but the intermediate state of the flow is not persisted and therefore the flow cannot be interrupted. The flow state of a macro-flow, on the other hand, is persisted, and therefore a flow instance can be interrupted and resumed later.

Further, sub-flows of a workflow can be included in a compensation scope – effectively, the outcome of the sub-flow can be recovered in the event that an error occurs during the sub-flow. These sub-flows are not transacted using a distributed two-phase commit protocol – doing so would imply holding locks on transaction state over long periods, which would eventually impede total system throughput. Rather, the sub-flow is made atomic through compensation: for every activity defined in the sub-flow, an ‘undo’ activity is defined that will reverse the results of its corresponding activity. The workflow engine logs the activities performed in the sub-flow. If any activity in the sub-flow fails, the compensating activity for each activity already performed is executed to return the state of the system to where it was at the beginning of the sub-flow, ensuring a binary outcome for the sub-flow: either all of the activities of the sub-flow are performed, or none of them are.
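Purely as an illustration of the compensation pattern described above – this is not the WebSphere workflow API, and the names are hypothetical – each activity pairs a forward action with an ‘undo’ action, and on failure the completed activities are compensated in reverse order:

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Illustrative compensation scope, not WebSphere's actual engine.
    interface Activity {
        void perform() throws Exception;   // forward action (e.g. reserve inventory)
        void compensate();                 // matching 'undo' (e.g. release inventory)
    }

    public class CompensationScope {
        public void run(Activity... activities) throws Exception {
            Deque<Activity> performed = new ArrayDeque<Activity>();
            try {
                for (Activity activity : activities) {
                    activity.perform();
                    performed.push(activity);   // log completed work for possible undo
                }
            } catch (Exception failure) {
                // Undo everything that completed, most recent first, so the
                // sub-flow ends all-or-nothing.
                while (!performed.isEmpty()) {
                    performed.pop().compensate();
                }
                throw failure;
            }
        }
    }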
Q: How do you foresee the future of Eclipse with WebSphere? Will we see more compatible tools for developers on the Windows platform?
A: Because Eclipse is an open-source framework, many different vendors are creating tools based on it for a whole variety of platforms, including Windows and .NET. You should expect that WebSphere Studio Application Developer and other vendors will add tools that are specific to the .NET platform as it matures and grows in acceptance.
Q: Finally, in passing, we are hearing a lot about autonomic computing. Is the WebSphere platform taking steps towards autonomic computing?
A: Data centers are growing increasingly complicated. Administrators are expected to manage computing systems for ever larger numbers of end users, requiring ever-greater amounts of computing capacity. Distributed computing provides a rich and powerful approach to increasing capacity beyond the normal limits of single computers, avoiding bottlenecks and single points of failure. But distributed systems also add to the complexity of the data center. Applications are often developed with specific dependencies on technology and certain resources, and when they are acquired from a number of different sources to cover the needs of the business, they add to the variety of technologies deployed in the data center. All this contributes to a situation that is beyond the ability of administrators to manage efficiently. Intermittent failures, built-in maintenance assumptions, fluctuations in workload, and ever-changing business requirements all vie for the attention of administrators.
The key to allowing information systems to grow to meet the demands of the business is to increase the intelligence of the software and hardware employed in those systems. Autonomics is the ability of information systems to regulate themselves automatically, responding to the dynamics of the system and striving to meet the goals of the computing center as specified by administratively defined policies. Autonomics are being built right into the core of the WebSphere Application Server. Last year WebSphere introduced support for auto-tuning, which monitors the real-time performance of the system based on more than 100 metrics built into the runtime. It looks for trends and patterns that suggest an imbalance in the system, and makes recommendations for adjusting the configuration and controls of the system to improve its peak efficiency.
In addition, WebSphere provides recommendations directly to administrators to allow them to build confidence in the advice it creates. Future releases of WebSphere will put direct control of the system’s operation in the auto-tuning engine, allowing it to act on its own advice in real time and relieving the administrator of having to tune the system manually, as they do with most systems today.
Thank you, Rob, for taking the time to give us information on WebSphere and IBM’s work in that area.