Visual Basic 6 Business Objects
When we discussed the CSLA in Chapter 2, we didn't dictate which machines would run any particular part of our application. In this section, we're going to go through the most common physical architectures. We'll explore how we can place the logical tiers of our application on physical machines to provide a rich, interactive user-interface, good application performance, and scalability.
2-Tier Physical Architecture
So far, I've portrayed 2-tier applications as the 'old way' of doing things. In reality, however, this is still the most common physical client/server architecture, so it's important that we look at how we can use the CSLA within this physical 2-tier environment.
In a traditional 2-tier design, each client connects to the data server directly, and the processing is distributed between the data server and each client workstation.
Take a look at the following diagram. On the left, we can see the physical machines that are involved; on the right, we can see the logical layers - next to the machines on which they'll be running:
In this case, we've put virtually all the processing on the client, except for the data processing itself. This is a very typical 2-tier configuration, commonly known as an intelligent client design, since quite a lot of processing occurs on the client workstation.
Intelligent client is just another name for a fat client, but without the politically incorrect overtones.
This approach makes the most use of the horsepower on the user's desktop, relying on the central database only for data services.
Just how much processing is performed on the server can have a great impact on the overall performance of the system. In many cases, the data processing can add up to a lot of work, and if the data server is being used by a great many clients then it can actually become a performance bottleneck. By moving most of the processing to the clients, we can help reduce the load on the data server.
Of course, the more processing that's moved to the client, the more data is typically brought from the server to the client to be processed. For example, suppose we want to get a list of customers in a given zip code and whose last name starts with the letter 'L'. We could have the server figure out which customers match the criteria and send over the result, or we could send over details of all the customers and have the client figure it out.
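The trade-off can be seen in code. The following sketch contrasts the two approaches, assuming an open ADO connection (cn) and a hypothetical Customers table with ZipCode and LastName fields - the table and column names are illustrative, not from the text:

```vb
' Server-side filtering: only the matching rows cross the network
Dim rs As ADODB.Recordset
Set rs = cn.Execute( _
    "SELECT * FROM Customers " & _
    "WHERE ZipCode = '53703' AND LastName LIKE 'L%'")

' Client-side filtering: every customer row crosses the network first,
' and the client does the work of finding the matches
Set rs = cn.Execute("SELECT * FROM Customers")
Do While Not rs.EOF
    If rs("ZipCode") = "53703" And Left$(rs("LastName"), 1) = "L" Then
        ' process this matching customer
    End If
    rs.MoveNext
Loop
```

The first approach loads the data server; the second loads the network and the client. Which is better depends on how busy each resource is.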
Here's the point: the more processing we move to the client, the more load we tend to put on the network. Of course, if we have no processing on the client, we may still generate a lot of network traffic - since the server becomes responsible for creating each screen the user sees, and those screens must be sent across to the client.
Ideally, we can find a balance between the processing on the server, the processing on the client, and the load on the network, one that uses each resource to its best advantage.
2-Tier with Centralized Processing
Traditional 2-tier architectures lack one very important feature. Typically, no business logic runs on the server: it's all located in the clients. Certainly, there may be a fair amount of work done on the server to provide data services, but the bulk of the business processing is almost always limited to the clients.
Even in a 2-tier setting, it would be very nice if we could put some services on our database server to provide shared or centralized processing. If our data server can support the extra processing, and it's running Windows, then we can most certainly design our application to match the following diagram:
With this approach, we have objects that are running on a central server machine. This means that they can easily interact with each other, allowing us to create shared services, such as a message service, that allow a business object to send messages to other business objects on other client workstations.
This model might also reduce the network traffic. Our data-centric business objects can do a lot of preprocessing on the data from the database before sending it over the network - so we can reduce the load on the network.
Another benefit to this approach is that it means the database doesn't have to be a SQL database. Since the application service objects sit between the database and the actual business objects on the clients, they can allow us to use a much less sophisticated database in a very efficient manner. This could also allow us to tap into multiple data sources, and it would be entirely transparent to the code running on the client workstations.
For instance, our application may need to get at simple text files on a central server - maybe hundreds or thousands of such files. We could create an application service object that sent the data from those files to the business objects on the client workstations and updated the data when it was sent back. From the business object's viewpoint, there would be no way of knowing whether the data came from text files or from memo fields in a complex SQL database.
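A minimal sketch of such an application service object might look like the following class module. The class name, file path, and method names here are hypothetical - the point is simply that the clients call Fetch and Save without ever knowing the data lives in flat files:

```vb
' CustomerFiles.cls - hypothetical application service object that runs
' on the central server and hides text-file storage from the clients
Option Explicit

' Return the contents of one data file as a string
Public Function Fetch(ByVal FileName As String) As String
    Dim nFile As Integer
    nFile = FreeFile
    Open "C:\Data\" & FileName For Input As #nFile
    Fetch = Input$(LOF(nFile), nFile)
    Close #nFile
End Function

' Write updated data back to the same file
Public Sub Save(ByVal FileName As String, ByVal Buffer As String)
    Dim nFile As Integer
    nFile = FreeFile
    Open "C:\Data\" & FileName For Output As #nFile
    Print #nFile, Buffer
    Close #nFile
End Sub
```

Swapping this class for one that reads memo fields from a SQL database would require no change at all to the business objects on the client workstations.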
Of course, this model puts a lot of extra processing on the database server. Before jumping on this as a perfect solution, we would need to evaluate whether that machine could handle both the data processing and the application service processing without becoming a bottleneck.
3-Tier Physical Architecture
In a 3-tier design, the client workstations communicate with an application server, and the application server communicates with the data server that holds the database.
This is a very powerful and flexible way to set things up. If we need to add another database, we can do so without changing the clients. If we need more performance, we just add another application server - again with little or no change to the clients.
In the following diagram, the physical machines are on the left, and the various tiers of the application on the right. You can see which parts of the application would be running on the different machines:
Client: Presentation and UI-Centric Business Objects
With this approach, both the presentation layer and some objects are placed on the client workstations. This may appear a bit unusual, since it's commonly thought that the business objects go on the central application server machine.
In an ideal world, keeping all the business processing on the application server would make sense. This would put all the business rules in a central place where they would be easy to maintain. As we discussed in Chapter 2, however, this typically leads to a batch-oriented user-interface rather than the rich interface most users desire.
By moving all our processing off the client workstations, we're also failing to make good use of the powerful computers on the users' desktops. Most companies have invested immense amounts of money to provide their users with powerful desktop computers. It seems counterproductive to ignore the processing potential of these machines by moving all the application's processing to a central machine.
Another view might be that the objects should be on both the client and the server, moving back and forth as required. This might seem better still, since the processing could move to the machine where it was best suited at any given time. This is the basic premise of the CSLA, where we've split each business object between its UI-centric and data-centric behaviors - effectively putting half the object on the client and half on the application server.
Unfortunately, Visual Basic has no innate ability to move objects from machine to machine. From Visual Basic's perspective, once an object has been created on a machine, that's where it stays. This means we need to come up with an effective way of moving our objects back and forth between the client and the application server. We'll look at some powerful techniques for handling this later in the chapter.
Performance issues play a large role in deciding where we should place each part of our application. Let's look at our 3-tier physical model and see how the tiers of our application will most likely communicate:
When we're using ActiveX servers like the ones we create with Visual Basic, the client workstations typically communicate with the application server through Microsoft's Distributed Component Object Model (DCOM). Our data-centric objects will most likely use OLE DB or ODBC to communicate with the database server, although this is certainly not always the case.
When we're working with DCOM, we have to consider some important performance issues. As we go through this chapter, we'll look at various design and coding techniques that we can use to achieve excellent performance. In general, however, due to the way DCOM works, we don't want to have a lot of communication going on across the network. This isn't because DCOM is slow, or less powerful than other network communication alternatives, such as the Distributed Computing Environment (DCE) or the Common Object Request Broker Architecture (CORBA); the cost of making calls across the network is common to all of these related technologies.
It is always important to minimize network traffic and calls to objects or procedures on other machines.
Regardless of performance arguments, we should always keep our objects physically close to whatever it is that the objects interact with the most. The user-interface should be close to the user, and the data processing should be close to the data. This means that the user-interface should be on the client, and the data services should be on the data server. By keeping the objects in the right place, we can avoid network communication and gain a lot of performance.
Our UI-centric business objects primarily interact with the user-interface, so they belong as close to that interface as possible. After all, the user-interface is constantly communicating with the business objects to set and retrieve properties and to call methods. Every now and then, a business object talks to its data-centric counterpart, but the vast bulk of the interaction is with the user-interface.
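To see why this matters, consider a rough sketch of a UI-centric object. The class and ProgID names here are hypothetical, but the pattern is the point: property access happens constantly and stays in-process on the client, while the data-centric counterpart on the application server is only contacted occasionally:

```vb
' Customer.cls - sketch of a UI-centric business object on the client
Option Explicit

Private mstrName As String

Public Property Get Name() As String
    Name = mstrName     ' in-process call: no network traffic
End Property

Public Property Let Name(ByVal Value As String)
    mstrName = Value    ' in-process call: no network traffic
End Property

Public Sub Load(ByVal ID As Long)
    ' One occasional round-trip to the data-centric counterpart on the
    ' application server (hypothetical ProgID, reached via DCOM)
    Dim objPersist As Object
    Set objPersist = CreateObject("MyApp.CustomerPersist")
    mstrName = objPersist.Fetch(ID)
    Set objPersist = Nothing
End Sub
```

If the Name property itself lived on the application server, every keystroke-level interaction with the UI would become a network round-trip.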
Application Server: Data-Centric Business Objects
As we discussed in the last section, the data-centric business objects run on the application server. Typically, these objects will communicate with the database server using OLE DB or possibly ODBC:
We'll probably use ActiveX Data Objects (ADO) to interact with OLE DB. If we're using a common relational database, such as SQL Server or Oracle, we may use any one of the database technologies available within Visual Basic 6.0. The most common technologies include:
- ActiveX Data Objects (ADO)
- Remote Data Objects (RDO)
In general, ADO is the preferred data access technology. The other data access technologies (RDO and ODBCDirect, for instance) continue to be supported, but Microsoft is putting its development efforts entirely toward improving and enhancing ADO (and OLE DB, its underlying technology). The version of ADO (2.0) included with Visual Basic 6.0 provides comparable or even better performance than technologies such as RDO.
If we choose not to use ADO and we're working with a typical database server, such as Oracle or SQL Server, then RDO is probably the next choice. It provides very good performance, and allows us to tap into many of the features of the database server very easily. RDO is just a thin object layer on top of the ODBC API, so there's very little overhead to degrade performance.
ODBCDirect should be avoided if at all possible. As part of the push toward ADO and OLE DB, Microsoft already considers it obsolete and recommends against using it in new development.
Something to keep in mind is that the application server can talk to the data server in whatever way really works best. For instance, our application server may use TCP/IP sockets, some form of proprietary DLL, or screen-scraping technology, to interact with the data source:
This illustrates a major benefit of this whole design, and one that a lot of companies can use. If our data is sitting on some computer that's hard to reach, or expensive in terms of licensing or networking, then we can effectively hide the data source behind the application server and still let our UI-centric business objects work with the data, regardless of how we access it.