Voice Review: Audium 3


Introduction

In our product review section so far, we have looked at a number of tools and technologies for developing and testing VoiceXML-based applications. The common thread among those tools is that they focus on editing, creating and testing VoiceXML applications. This review is different: here we discuss a server-side framework that lets developers build interactive, dynamically generated voice applications.

As developers, we could certainly use server-side scripting technologies such as JavaServer Pages (JSP), ASP.NET, Perl, PHP, ColdFusion and so on to generate VoiceXML output (a minimal hand-rolled sketch of this approach appears after the list below). Alternatively, we could use a server-side framework that builds the VoiceXML application dynamically, without our having to write the VoiceXML dialog logic by hand. The benefits of a server-side framework include:

  • Dynamic generation of VoiceXML
  • Abstraction of the complexity involved in creating production-quality conversational applications, including automatically generated error handling, session management, and so on.
  • Insulation from an evolving standard. VoiceXML 1.0, the version currently regarded as the standard, has significant portability issues (it does not specify a grammar format that every VoiceXML implementation must support), whereas 2.0 is still a W3C working draft. Until a clear standard emerges (hopefully 2.0 will soon become an official recommendation), different VoiceXML platform implementations remain incompatible in places; a server-side framework can absorb these differences as part of its generation logic.
  • Depending on the toolsets available with the framework, business/application logic can be abstracted into modular components and plugged in where appropriate.
  • Potentially, generation of applications for different user interfaces: web browsers, voice browsers, wireless devices, and so on.
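To make the scripting approach mentioned above concrete, here is a minimal hand-rolled sketch (not Audium code; the servlet name, package and prompt text are hypothetical) of a plain Java servlet that emits a VoiceXML 1.0 document. A server-side framework such as Audium essentially generates this kind of markup for you, along with the error handling and session management that a hand-rolled servlet would still have to add.

package com.example.voice; // hypothetical package for this sketch

import java.io.IOException;
import java.io.PrintWriter;
import java.util.Date;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical hand-rolled servlet that emits a VoiceXML 1.0 document.
public class GreetingServlet extends HttpServlet {

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // The prompt text could just as easily come from a database or a business rule.
        String greeting = "Hello World, today is " + new Date();

        response.setContentType("text/xml");
        PrintWriter out = response.getWriter();
        out.println("<?xml version=\"1.0\"?>");
        out.println("<vxml version=\"1.0\">");
        out.println("  <form>");
        out.println("    <block>");
        out.println("      <prompt>" + greeting + "</prompt>");
        out.println("    </block>");
        out.println("  </form>");
        out.println("</vxml>");
    }
}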

Audium Concepts

One of the emerging server-side dynamic VoiceXML application platforms is Audium, from Audium Corporation. The Audium platform includes two key components:

  • Audium Builder, which enables a developer to interact with and build/configure the VoiceXML application
  • Audium Server, which includes the following components: Voice Application Manager, Call Service Logic, Audium Dialog Module Inventory, Data & Application Connectivity, Rules Engine, and Audit & Logging facilities. This is the runtime of the Audium platform; it manages the overall dynamic generation of the application, application management, reusable modules and backend connectivity.

Audium currently supports the VoiceXML 1.0 specification and has embedded logic to support multiple VoiceXML platform implementations.

Version Changes

A new version of the product (release 3, beta 1) became available in time for this review. It builds on the existing version; the major difference is that the product now provides a Java Swing-based graphical user interface for designing the call flow and setting up the other properties of the application. For those familiar with Audium, this new interface replaces the simplified web-based checkerboard interface.

Availability/Pricing

The current release version of Audium is Version 2. It is available for developers to download from Audium’s website. Pricing is per port of the deployed VoiceXML platform.

Installation

As we have seen in the previous section, there are two main components of the Audium product: the server and the builder. Accordingly, installing the environment involves two pieces:

  • Audium Common Platform & Builder are installed on the file system, typically in a location like "C:\Program Files\Audium". This location also becomes the value of the AUDIUM_HOME environment variable.
  • Audium Web Server Component is available as a WAR (J2EE Web Archive) and needs to be deployed on a J2EE web application server. Audium currently utilizes only the servlet aspect of the J2EE platform and hence can be deployed on a standalone servlet engine. Supported application servers include Apache Tomcat 4.0.1, BEA WebLogic 6.0/6.1 and JBoss; the application server used for this review was Apache Tomcat 4.0.1.

First looks – Audium Builder

Now that we have learned about the various components of the platform, let's start developing dynamic applications. We start with the most visible aspect of the Audium solution, the Audium Builder. Audium Builder (shown in the figure below) is the graphical environment for configuring and building VoiceXML applications for the Audium platform. The figure shows a graphical representation of a simple "Hello World" application, which consists of three main steps: Call Start, an Audio module that uses text-to-speech to say "Hello World," and Call End.

As illustrated by the previous screenshot, an application in Audium is composed of a set of different components. Audium ships with a pre-built set of basic speech application modules, known as the "Module Inventory," which can be used as part of the call flow. As shown below, these include modules for forms, menus, playing back audio prompts/text-to-speech, recognizing basic inputs (credit card, currency, date, digit, etc.) and VoiceXML features such as transfer and record. As a developer, you can also build your own modules.

We will walk through the process of developing an application with the Audium platform in the section below, but before we go further, let's see how our application executes. Start your servlet engine (Apache Tomcat in this case) and open the URL http://localhost:8080/Audium/Server?application=HelloWorld in Internet Explorer; you will see the VoiceXML output that the server generates.
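Since the Audium server endpoint is plain HTTP, you can also inspect the generated VoiceXML outside a browser. The following minimal sketch (assuming Tomcat is running on the default port 8080, as above) simply fetches the URL and prints whatever the server returns:

package com.example.voice; // hypothetical package for this sketch

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;

// Minimal sketch: fetch the VoiceXML that the Audium server generates
// for the HelloWorld application, exactly as a browser or gateway would.
public class FetchHelloWorld {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:8080/Audium/Server?application=HelloWorld");
        URLConnection connection = url.openConnection();

        BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream()));
        String line;
        while ((line = reader.readLine()) != null) {
            System.out.println(line);   // prints the generated VoiceXML document
        }
        reader.close();
    }
}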

Instead of opening the application in Internet Explorer, point your VoiceXML gateway (whether hosted externally by a voice service provider or within your company) at this URL and you will be able to call into the application and execute it.

Developing a simple VoiceXML Application – Step by Step

Let's see, step by step, what is involved in building a VoiceXML application the "Audium way."

1. Start the Builder
"start %AUDIUM_HOMEBuilderAudiumBuilder.cmd"

2. Select “New Application.”

3. Configure the basic properties of the application. Typically, only the application name needs to be provided. Other properties, such as caching mechanism, timeout, error messages, Voice Browsers supported, user management and logging, are included for the overall management of the application.
4. Open the Call Flow for the application. The application initializes with the "Call Start" module.
5. Drag an Audio module into the call flow and configure it to play the TTS message "Hello World."

Dynamic Module Configurations

In step 5 of the application we built with the Audium Builder, we introduced the notion of a configuration for a module. A module's configuration describes the various properties, variables, prompts and so on that the module exposes. In simpler scenarios a module configuration can be static; to support more complex scenarios, it can be dynamic. Take the Audio module as an example. By default, the Audio module supports a set of prompts that can be configured either to play pre-recorded audio or to play a text-to-speech message. If we want to play a static line of text, we can simply make it one of the prompts played by the Audio module. However, if parts of that prompt are retrieved dynamically from a backend application, we need to give the module a Dynamic Configuration.

Two mechanisms are supported by Audium to achieve dynamic configuration capability:

  1. Where the configuration comes from a Java class: in this scenario we create a class that implements the com.audium.server.proxy.ModuleInterface Java interface. The compiled class is then placed in the %AUDIUM_HOME%\applications\ApplicationName\java\application\classes directory.
  2. Where the configuration information comes from a web-based HTTP/XML exchange: in this scenario, information about the current VoiceXML session is posted over HTTP and the configuration is returned as XML, keeping the interaction loosely coupled.

To understand dynamic module configurations, let's look at an example. Say we want to build an Employee Locator application: after recognizing the name of an employee, the application should read out that employee's information (phone numbers, email address) and then transfer the caller to the selected phone (mobile or direct). The call flow for the application is shown below.

Implementing a module configuration is a relatively simple task: it just requires us to implement a single method, getConfig(), which returns a ModuleConfig instance. Typically, you simply modify the default configuration that is passed in and add the dynamic behavior.

package com.silverline;

import java.util.*;
import com.audium.server.proxy.*;
import com.audium.server.module.*;
import com.audium.server.session.*;
import com.audium.server.*;
import com.audium.server.xml.*;

public class EmployeeModuleConfig implements ModuleInterface {

    public ModuleConfig getConfig(String name,
                                  EntityAPI input,
                                  ModuleConfig defaults) throws AudiumException {

        // Start from the default (static) configuration defined in the Builder.
        ModuleConfig.PromptGroup initial = defaults.getPromptGroup("initial_prompt", 1);

        // The employee name captured earlier in the call flow.
        String employee_name = input.getPublicData("EmployeeForm", "selection");

        // Query the database using JDBC and get the employee information
        // into the string employee_info_string (a hypothetical helper for
        // this is sketched after this listing).
        String employee_info_string = EmployeeDirectory.lookup(employee_name);

        // Add the dynamically built prompt to the default configuration.
        initial.insertPrompt(1, defaults.new AudioPrompt(employee_info_string, null));

        return defaults;
    }
}
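The JDBC query itself is only hinted at in the comment above. A minimal sketch of the hypothetical EmployeeDirectory helper referenced there might look like the following; the JDBC URL, credentials, table and column names are all placeholders, not anything defined by Audium.

package com.silverline;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Hypothetical helper used by EmployeeModuleConfig; everything about the
// database (driver, URL, credentials, schema) is a placeholder.
public class EmployeeDirectory {

    public static String lookup(String employeeName) {
        String info = "Sorry, no details were found for " + employeeName;
        try {
            Connection conn = DriverManager.getConnection(
                    "jdbc:yourdb://localhost/directory", "user", "password");
            try {
                PreparedStatement stmt = conn.prepareStatement(
                        "SELECT phone, email FROM employees WHERE name = ?");
                stmt.setString(1, employeeName);
                ResultSet rs = stmt.executeQuery();
                if (rs.next()) {
                    info = employeeName + " can be reached at " + rs.getString("phone")
                            + ", or by email at " + rs.getString("email");
                }
                rs.close();
                stmt.close();
            } finally {
                conn.close();
            }
        } catch (Exception e) {
            // Fall through to the default message if the lookup fails.
        }
        return info;
    }
}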

Rules

Whereas a Dynamic Configuration can provide dynamic behavior in many scenarios, it is limited in that it can only change the output of a module; it cannot change the call flow itself. In some scenarios, however, we want the call flow itself to change depending on the outcome of a business rule. Audium supports this by allowing VoiceXML application developers to implement a "Rule". As with a Dynamic Configuration, a rule can be implemented either as a Java class or as an HTTP/XML interaction.

To understand how rules are built, suppose we want to build a customer service application in which, as a business rule, platinum customers are treated differently from regular customers. The figure below shows an illustrative call flow.

To implement a rule using Java, the developer is required to implement the com.audium.server.proxy.RuleInterface Java Interface, as illustrated below.

package com.silverline;

import java.util.*;
import com.audium.server.proxy.*;
import com.audium.server.session.*;
import com.audium.server.*;

public class CustomerRule implements RuleInterface {

    public String doRule(String name, EntityAPI input) throws AudiumException {
        // The customer ID captured earlier in the call flow.
        String customerID = input.getPublicData("CustomerForm", "selection");

        // Classify the customer; the returned value is the outcome of the
        // rule, which determines how the call flow proceeds.
        String customer_type = CustomerBusinessObject.getCustomerType(customerID);
        return customer_type;
    }
}
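CustomerBusinessObject is application code rather than part of the Audium API, and its implementation is not shown here. A minimal hypothetical stand-in might classify callers as follows; a real version would consult a CRM system or database, and the returned strings would presumably need to correspond to the branches configured in the call flow.

package com.silverline;

import java.util.HashMap;
import java.util.Map;

// Hypothetical business object backing CustomerRule above.
public class CustomerBusinessObject {

    private static final Map PLATINUM_CUSTOMERS = new HashMap();

    static {
        // Hard-coded sample data for illustration only.
        PLATINUM_CUSTOMERS.put("1001", Boolean.TRUE);
        PLATINUM_CUSTOMERS.put("1002", Boolean.TRUE);
    }

    public static String getCustomerType(String customerID) {
        return PLATINUM_CUSTOMERS.containsKey(customerID) ? "Platinum" : "Regular";
    }
}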

Worklets

So far, Rules change the call-flow sequence and Dynamic Configurations change the VoiceXML markup that Audium generates. What if you want to do some background processing without changing either the call flow or the markup? For that, Audium provides the concept of Worklets. As with the others, a worklet can be implemented either as a Java class or as an HTTP/XML interaction.

To understand how worklets are developed and used, suppose we want a worklet that records key interactions with the customer in an audit trail database. The figure below shows an illustrative call flow.

To implement a worklet using Java, the developer is required to implement the com.audium.server.proxy.WorkletInterface Java Interface, as illustrated below.

package com.silverline;

import com.audium.server.proxy.*;
import com.audium.server.session.*;
import com.audium.server.*;
import java.util.*;

public class CustomerAudit implements WorkletInterface {

    public void doWork(String name, WorkletAPI input) throws AudiumException {
        // The customer ID captured earlier in the call flow.
        String customerID = input.getPublicData("CustomerForm", "selection");

        // Record the interaction, with a timestamp, in the audit trail.
        Date today = new Date();
        AuditTrailDatabase.log(customerID, today);
    }
}
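Likewise, AuditTrailDatabase is application code, not an Audium class. A minimal hypothetical version that writes the audit record over JDBC might look like this; the JDBC URL, credentials and table definition are placeholders.

package com.silverline;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Timestamp;
import java.util.Date;

// Hypothetical audit helper used by the CustomerAudit worklet above.
public class AuditTrailDatabase {

    public static void log(String customerID, Date when) {
        try {
            Connection conn = DriverManager.getConnection(
                    "jdbc:yourdb://localhost/audit", "user", "password");
            try {
                PreparedStatement stmt = conn.prepareStatement(
                        "INSERT INTO audit_trail (customer_id, event_time) VALUES (?, ?)");
                stmt.setString(1, customerID);
                stmt.setTimestamp(2, new Timestamp(when.getTime()));
                stmt.executeUpdate();
                stmt.close();
            } finally {
                conn.close();
            }
        } catch (Exception e) {
            // A worklet runs in the background, so failures here are logged
            // rather than propagated back into the call flow.
            e.printStackTrace();
        }
    }
}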

New Modules

Earlier we had a chance to look at the various modules provided by Audium. Since applications are assembled from modules, a natural question is whether you can create your own; Audium provides a mechanism for exactly that. For instance, you might want to expose functionality that your gateway provides but that is not part of the Audium system, such as Speech Verification. Building a new module requires extending the abstract Java class com.audium.server.module.ModuleBase, which serves as the base class for all modules used with the Audium server. To create VoiceXML output from the module, the Audium Voice Foundation Classes (VFC) should be used.

When to Use What?

  • Dynamic Module Configuration: change the embedded VoiceXML markup without changing the overall call flow; for instance, providing a dynamic prompt.
  • Rule: change the call flow without affecting the markup.
  • Worklet: do some processing in the background, with no change to the call flow or to the VoiceXML markup.
  • Module: create entirely new functionality, encapsulated as a reusable module.

Conclusion

It is pretty clear that the Audium product is different from all the other IDEs, development tools and hosted development environments we have reviewed so far. It falls into a class of products I call "server-side dynamic VoiceXML platforms."

Overall, I believe the Audium product has some good features, especially in the soon-to-be-released 3.0 version (such as the intuitive call-flow building capability). As with any pre-release product, I did come across some issues, which I reported to the Audium product engineering team. Capabilities I believe should be added to the product include closer integration with external/internal grammar development (especially W3C XML grammars), step-by-step debugging, distributed development/deployment support (through remote deployment capabilities) and a larger suite of reusable components.


About Hitesh Seth

Hitesh Seth is Chief Technology Evangelist for Silverline Technologies, a global eBusiness and mobile solutions consulting and integration services firm. He writes a column on VoiceXML technology for XML Journal and regularly writes for other technology publications, including Java Developers Journal and Web Services Journal, on topics such as J2EE, Microsoft .NET, XML, wireless computing, speech applications, web services and integration. Hitesh received his bachelor's degree from the Indian Institute of Technology Kanpur (IITK), India. Feel free to email any comments or suggestions about the articles featured in this column to hks@hiteshseth.com.
