Book Review: Enterprise Application Architecture

  • November 19, 2002
  • By Sam Huggill

In order to deploy our objects across our enterprise, we need to examine the infrastructure that we have designed for our enterprise system. What follows is a quick review of how we divided the system up in the earlier chapters.

Processing Divisions

One of the very first things we talked about in this book was how to divide the system by isolating the particular processes going on within it. We further saw how we could sub-divide the processing into major and minor processes.

Major Processing Divisions

In Chapter 2, we determined that there were two types of major process isolation:

OLTP - On-Line Transaction Processing

OLTP is designed for real-time transaction processing. We expect the OLTP portion of the system to respond immediately to all requests, updates, etc. This system is optimized (at the database level) for speed first and then for ease of use when dealing with single items (one employee, one customer, one purchase, etc.). Over time, we have gradually reached the point where we have driven most of the redundancy out of our well-designed database systems. As an industry, we have learned to design and optimize our systems around a unit of work we call a transaction. The optimal transaction uses the smallest possible data set that will get a particular task accomplished. The Data objects we learned to design and build in this book are optimized for OLTP.
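
To make the idea of a small, focused unit of work concrete, here is a minimal sketch in Python (using the standard sqlite3 module; the employees table and its columns are hypothetical stand-ins for a real schema):

    import sqlite3

    # One OLTP unit of work: the smallest data set that gets the task done.
    conn = sqlite3.connect("enterprise.db")
    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            conn.execute(
                "UPDATE employees SET salary = ? WHERE employee_id = ?",
                (52000, 42),
            )
    finally:
        conn.close()

The transaction touches exactly one row; nothing about "all employees" ever enters the picture.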

OLAP - On-Line Analytical Processing

The OLAP or DSS system is not designed for real-time transaction processing; it is really something of a data warehouse. It is optimized in a manner that makes it easy to get answers, even to complex enterprise-wide questions, in a very short time. We expect the OLAP portion of the system to provide us with easy access to answers for difficult (maybe formerly impossible) questions that span multiple items (enterprise-wide: all the employees, all the customers in France, etc.). Although we took the time to realize that we do need to separate the OLAP data storage activities from the OLTP data storage activities, we did not get a chance to learn to code the special Data objects that handle the OLAP portion of our system.
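
For contrast, here is a minimal sketch of the kind of enterprise-wide question the OLAP store is built to answer; the star-schema tables (sales_fact, customer_dim) are hypothetical:

    import sqlite3

    # One OLAP question spans many items at once: total sales per country.
    conn = sqlite3.connect("warehouse.db")
    rows = conn.execute(
        """
        SELECT c.country, SUM(f.amount) AS total_sales
        FROM sales_fact AS f
        JOIN customer_dim AS c ON c.customer_key = f.customer_key
        GROUP BY c.country
        ORDER BY total_sales DESC
        """
    ).fetchall()
    conn.close()

Note that the query reads everything and writes nothing; this is exactly the access pattern the OLTP design above is not optimized for.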

Minor Processing Divisions

The next thing we discovered was that in addition to the two major processing divisions in every system, we could also identify four minor processing divisions.

Data Storage Processes

These processes are responsible for the physical storage and retrieval of data from some persistent source. If we design our system correctly, this may be one of the few places where the system actually performs reads and writes to a physical disk.
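
A minimal sketch of a Data Storage process; the file path and JSON format are hypothetical stand-ins for whatever persistent source the system actually uses:

    import json

    # A Data Storage process: one of the few places the system actually
    # touches a physical disk.
    def store(path, record):
        with open(path, "w") as f:   # the physical write
            json.dump(record, f)

    def retrieve(path):
        with open(path) as f:        # the physical read
            return json.load(f)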

Data Manipulation Processes

Data Manipulation Processes are designed to know where an organization's data is stored and the steps that must be taken to retrieve, remove, or change that data. These processes do not typically perform tasks like disk I/O themselves. They usually invoke other processes, most of them on another physical machine, to handle the physical writes and reads. The Data Manipulation processes are designed to wrap around the core of data available to our organization and give the programmers, and the users, access to the entirety of that data as though it existed on a single machine. In reality, that data may exist on a centralized SQL Server machine, but it may also exist on legacy machines like mainframes, tired old UNIX systems, in flat files, and maybe even somewhere out on the Internet.
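
Here is a minimal sketch of that routing idea; the store names and fetch callables are hypothetical, and the point is only that this layer knows where data lives while delegating the physical I/O elsewhere:

    # The manipulation layer knows *where* each kind of data lives.
    STORE_LOCATIONS = {
        "employees": "sql_server",
        "payroll_history": "mainframe",
        "price_list": "flat_file",
    }

    def fetch(entity, key, stores):
        """Route a read to whichever store holds the entity.

        'stores' maps a location name to a callable that performs the
        physical I/O, usually on another machine entirely.
        """
        return stores[STORE_LOCATIONS[entity]](entity, key)

To a caller, fetch("payroll_history", 42, stores) looks no different from fetch("employees", 42, stores), even though one read may hit a mainframe and the other a SQL Server box.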

Data/Business Rule Integration Processes

While Data Manipulation processes know about the organization's data, the Data/Business Rule Integration processes know about the organization's talent. Their job is to pull together data and business rules and, by combining the two, increase the value of both.
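
A minimal sketch, assuming a hypothetical data_layer interface and a hypothetical purchase-limit rule, of how such a process combines the two:

    # Pull data through the manipulation layer, then apply a rule to it.
    def approve_purchase(data_layer, employee_id, amount):
        employee = data_layer.get_employee(employee_id)
        # Business rule: purchases above the employee's limit need sign-off.
        if amount > employee["purchase_limit"]:
            return {"approved": False, "reason": "requires manager sign-off"}
        return {"approved": True, "reason": None}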

Presentation Processes

The Presentation processes' job is to present data to, or receive it from, the end user. We found that these processes are really most like the Data Storage processes we looked at above. The real difference between the Data Storage and Presentation sets of processes is where we write the end result of the process. While the Data Storage processes primarily write to the system's disks, the Presentation processes primarily write to the end users' workstation screens, files, and printers. But both sets of processes are concerned with changing fleeting binary data transmissions into something of a more persistent nature.
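
A minimal sketch of a Presentation process; the row shape is hypothetical, and the same function could just as easily write to a file or a print spool as to the screen:

    import sys

    # A Presentation process: like Data Storage, it makes fleeting data
    # persistent, but it writes to the user's screen rather than a disk.
    def present(rows, out=sys.stdout):
        for row in rows:
            out.write(f"{row['country']:<20}{row['total_sales']:>12,.2f}\n")

    present([{"country": "France", "total_sales": 1250000.0}])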

So splitting the enterprise according to processing resulted in this:

[Image 1: The enterprise split into its major and minor processing divisions]

Physical Divisions - Tiers

We found that it was possible to deploy the available physical resources in a manner that mirrored the processing divisions. We split the servers into the following physical tiers:

  • The Data tier - Handles the Data Storage processes
  • The Data Centric tier - Handles the Data Manipulation processes
  • The User Centric tier - Handles the Data/Business Rule Integration processes
  • The Presentation tier - Handles the Presentation processes

Although tier division is not a new concept in distributed architecture, we found that when we combined this divisional strategy with our processing divisions, we could focus the processing power of a large number of presentation and object broker/transaction processing servers on a much smaller number of database servers at the center of our system. In other words, it made more sense to imagine that the servers existed as a series of concentric rings:

[Image 2: The physical tiers arranged as concentric rings of servers]

During this time, we came to realize that at the core of distributed architecture is the very simple idea that we can employ more than one machine to handle the processing load for our enterprise.

Logical Divisions - Spheres

At this time, we also realized that in order to distribute the processing load across more than one machine, the applications we developed needed to be a little different from what we might have grown accustomed to in the past. We found that we needed to learn to think of our applications as the set of related processes we talked about earlier rather than as a single monolithic block of code. The reasoning for this was simple. It doesn't matter if we have 10, or 100, or even 1,000 servers available to execute our code. If that code exists as a monolith, then we can only run that block of code on one server at a time. On the other hand, if we break the monolith down into several sets of complementary logical processes, we can run each of the different processes on different machines simultaneously.

This meant that it was possible to set up both our physical and logical systems in a complementary manner. When we paired this idea with the server farm designs we looked at earlier, a new vision of our enterprise emerged. This combining of physical and logical assets in a parallel fashion had the curious effect of allowing us to envision our system as a series of spheres, where each one of the spheres represented a minor processing division:

[Image 3: The four minor processing divisions as a series of nested spheres]

Just like we did with the servers in the physical system above, we placed the smallest sphere, the Data sphere, at the center of the enterprise in our logical system. The reason we did this was that we knew the power of distributed architecture lay in its ability to share the processing load across as many machines as possible. We intuitively understood that distributing the processing load across a system did not mean that we had to distribute the data we stored in that system. We learned to position our available resources, both hardware and software, in a fashion that focused the energy of the system towards the core of data at the center of the system.


Once we had positioned our Data sphere at the center of our enterprise, we set out to construct a system that would allow us to share the smallest possible number of database servers at the core of our enterprise with the largest number of users. We found that we could envision the different spheres (Data sphere, Data Centric sphere, User Centric sphere, and Presentation sphere) as a series of nested spheres. Although I didn't take time to make it too clear as we were working through the chapters, we started solving our programming problems at the center of this system, at the Data sphere.

During the time that we were learning to develop ECDOs, we were very careful to craft each data object as three distinct sets of processes. We designed one set of processes, the stored procedures, to be executed on the Data sphere. We crafted another set of processes, those in the DC Object, to be executed on the Data Centric sphere. And a third set of processes, those in the UC Object, was destined to be executed on the User Centric sphere. The end result of this careful design and execution is that we have, in a very real way, moved the data store out to the third sphere in our system. Remember that while we were learning to build these data objects, we often called them something like a 'proxy for a table' or a 'proxy for a view'.
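
A minimal sketch of that three-way split, rendered in Python for brevity rather than the book's own code; the class names, the stored procedure, and the db interface are all hypothetical:

    # 1. Data sphere: a stored procedure does the physical work; code
    #    outside the database only ever calls it.
    FETCH_SPROC = "EXEC spEmployeeGet @ID = ?"

    class EmployeeDC:
        """Data Centric object: runs near the database, calls the sproc."""
        def __init__(self, db):
            self.db = db
        def fetch(self, employee_id):
            return self.db.execute(FETCH_SPROC, (employee_id,))

    class EmployeeUC:
        """User Centric object: runs near the user, holds the fetched state."""
        def __init__(self, dc_object):
            self.dc = dc_object
            self.state = None
        def load(self, employee_id):
            self.state = self.dc.fetch(employee_id)
            return self.state

Once an EmployeeUC instance has loaded its state, the user's code works against that 'proxy for a table' without ever touching the database servers directly.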


ECDOs are distributed data objects that inherently spread out the load normally forced upon the database servers across as many servers as we have available in the Data Centric and User Centric spheres.

Functional Divisions

Although the physical processing improvements our distributed objects enable are great, we did something far more important with the ECDO when we effectively moved the database to the User Centric sphere in the system. We purified our application design and development practices. While we were learning how to build distributed data objects, we didn't allow business rules to creep into our data object designs. We eliminated the business rules from all of our data handling processes. We took on the elimination of business rules from our objects with such fervor that we didn't even begin to consider the business rules in our applications until very late in our journey.

When we did finally begin to think about business rules, we ensured that they wouldn't infiltrate our data objects by designing a couple of special objects to handle them. We called these special objects Connector objects and Veneers. We learned that it was possible to distill the business rules from a set of requirements and handle all of those rules using just data objects, Connector objects, and Veneers.
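
A minimal sketch, with all names hypothetical, of how this division keeps the business rule out of the data object:

    class EmployeeData:
        """Pure data object: no business rules, just persistence calls."""
        def save(self, record):
            ...  # delegate to the DC object and its stored procedures

    class RaiseConnector:
        """Connector: the business rule lives here, not in the data object."""
        MAX_RAISE = 0.10
        def apply_raise(self, record, pct):
            if pct > self.MAX_RAISE:
                raise ValueError("raise exceeds policy limit")
            record["salary"] *= 1 + pct
            return record

    class PayrollVeneer:
        """Veneer: a thin, application-specific face over the shared parts."""
        def __init__(self, data, connector):
            self.data, self.connector = data, connector
        def give_raise(self, record, pct):
            self.data.save(self.connector.apply_raise(record, pct))

If the raise policy changes, only RaiseConnector changes; EmployeeData remains reusable by every other application in the enterprise.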

This showed us that it was entirely possible to think about building database applications without really having to think about databases. This means that we can divide our development efforts into two separate challenges. The first is to create reusable data objects that can be shared amongst different applications. The second is to take the reusable data objects and to incorporate them into unique applications using Connector objects and Veneers:

[Image 4: Reusable data objects assembled into unique applications through Connector objects and Veneers]

What we can garner from the last paragraph is that when we start to think about deployment issues for our enterprise, we really need to think about deployment in two phases:

  • The first phase of deployment involves distributing the reusable data objects across the spheres in our system.
  • Once these data objects are available throughout the enterprise, we can begin to think about deployment issues at the level of the individual applications.

In the next section of this chapter, we are going to consider the deployment of data objects across the spheres of the system. In order to handle these deployment issues, we need to think about SQL Server at the Data sphere, Microsoft Transaction Server managing transactions at the Data Centric sphere, Microsoft Transaction Server operating as an object broker at the User Centric sphere, and then Internet Information Server on the Presentation sphere.
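
That deployment plan can be summarized as data; the products below come straight from the text, while the host names are hypothetical placeholders:

    # Sphere -> (product named in the text, hypothetical host)
    DEPLOYMENT = {
        "Data sphere":         ("SQL Server",                   "db01"),
        "Data Centric sphere": ("Microsoft Transaction Server", "dc01"),
        "User Centric sphere": ("Microsoft Transaction Server", "uc01"),
        "Presentation sphere": ("Internet Information Server",  "web01"),
    }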
