Visual Basic 6 Business Objects

In Chapter 2, we looked at the logical parts, or tiers, of a client/server application.
In this chapter, we’ll walk through the common physical architectures that we’re likely to
encounter as we develop our applications. We’ll look at how the different logical tiers of
the application fit into each physical model, and we’ll discuss some different options in
each case.

The physical architectures that we’ll discuss include:

  • 2-tier (client workstations and a database server)
  • 3-tier (client workstations, application servers, and database servers)
  • n-tier (traditional or browser-based client, Web server and/or application servers, and database servers)

We’ll also look at some specific design concerns for our business objects. Our
UI-centric business objects need to communicate with a user-interface, and there are some
issues that we need to be aware of when we’re designing our objects to make this work
well.

We’ll discuss how the Component Object Model (COM) can be used by our objects to
communicate with each other. There are some serious performance concerns we need to
consider as we implement our objects and their communications. Fortunately there are a
number of mechanisms we can use to minimize the performance impact and we’ll examine
a number of them.

Additionally, our objects need to be persistent. This means that they must have a way
to be saved to and restored from a database. The CSLA (Component-based, Scalable, Logical
Architecture) provides for this, and in this chapter we’ll get right into the details of
how it’s all done. As ever, what sounds easy enough in principle can be challenging in
practice, so we’ll take a good look at some of the techniques available in Visual Basic to
make it fast and easy to persist objects.

When we discussed the CSLA, in Chapter 2, we didn’t dictate which machines were to run
any particular part of our application. In this section, we’re going to go through the
most common physical architectures. We’ll explore how we can place the logical tiers of
our application on physical machines to provide a rich, interactive user-interface, good
application performance, and scalability.

2-Tier Physical Architecture

So far, I’ve portrayed 2-tier applications as the ‘old way’ of doing things. In
reality, however, this is still the most common physical client/server architecture, so
it’s important that we look at how we can use the CSLA within this physical 2-tier
environment.

Traditional 2-Tier

In a traditional 2-tier design, each client connects to the data server directly, and
the processing is distributed between the data server and each client workstation.

Take a look at the following diagram. On the left, we can see the physical machines
that are involved; on the right, we can see the logical layers – next to the machines on
which they’ll be running:

In this case, we’ve put virtually all the processing on the client, except for the data
processing itself. This is a very typical 2-tier configuration, commonly known as an
intelligent client design, since quite a lot of processing occurs on the client
workstation.

Intelligent client is just another name for a fat client, but without the politically
incorrect overtones.

This approach makes the most use of the horsepower on the user’s desktop, relying on
the central database only for data services.

Just how much processing is performed on the server can have a great impact on the
overall performance of the system. In many cases, the data processing can add up to a lot
of work, and if the data server is being used by a great many clients then it can actually
become a performance bottleneck. By moving most of the processing to the clients, we can
help reduce the load on the data server.

Of course, the more processing that’s moved to the client, the more data typically has
to be brought from the server to the client to be processed. For example, suppose we want
to get a list of customers in a given zip code whose last names start with the letter ‘L’.
We could have the server figure out which customers match the criteria and send over just
the result, or we could send over the details of all the customers and have the client
figure it out.
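As a sketch of the first option, we could let the database server do the filtering so
that only the matching rows ever cross the network. This example uses ADO (which we’ll
discuss later in the chapter); the connection variable, table, and column names here are
hypothetical:

```vb
' Hypothetical ADO query that pushes the filtering to the server;
' only the matching customers come back across the network.
Dim rsCustomers As ADODB.Recordset

Set rsCustomers = New ADODB.Recordset
rsCustomers.Open "SELECT Name, Phone FROM Customers " & _
                 "WHERE ZipCode = '55401' AND Name LIKE 'L%'", _
                 cnDatabase, adOpenForwardOnly, adLockReadOnly
```

The alternative, pulling every customer row to the client and filtering there, would move
the same decision into client-side loop code at the cost of far more network traffic.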

Here’s the point: the more processing we move to the client, the more load we tend to
put on the network. Of course, if we have no processing on the client, we might also have
to send a lot of material over the network – since the server becomes responsible for
creating each screen that’s seen by the user, and those screens need to be sent across to
the client.

Ideally, we can find a balance between the processing on the server, the processing on
the client, and the load on the network, which uses each to their best advantage.

2-Tier with Centralized Processing

Traditional 2-tier architectures lack one very important feature. Typically, no
business logic runs on the server: it’s all located in the clients. Certainly, there may
be a fair amount of work done on the server to provide data services, but the bulk of the
business processing is almost always limited to the clients.

Even in a 2-tier setting, it would be very nice if we could put some services on our
database server to provide shared or centralized processing. If our data server can
support the extra processing, and it’s running Windows, then we can most certainly design
our application to match the following diagram:

With this approach, we have objects that are running on a central server machine. This
means that they can easily interact with each other, allowing us to create shared
services, such as a message service, that allow a business object to send messages to
other business objects on other client workstations.

This model might also reduce the network traffic. Our data-centric business objects can
do a lot of preprocessing on the data from the database before sending it over the network
– so we can reduce the load on the network.

Another benefit to this approach is that it means the database doesn’t have to be a SQL
database. Since the application service objects sit between the database and the actual
business objects on the clients, they can allow us to use a much less sophisticated
database in a very efficient manner. This could also allow us to tap into multiple data
sources, and it would be entirely transparent to the code running on the client
workstations.

For instance, our application may need to get at simple text files on a central server
– maybe hundreds or thousands of such files. We could create an application service object
that sent the data from those files to the business objects on the client workstations and
updated the data when it was sent back. From the business object’s viewpoint, there would
be no way of knowing whether the data came from text files or from memo fields in a
complex SQL database.
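As a rough sketch, such a service object might expose a method like the following. The
method name and file layout are hypothetical, but the file-handling calls are standard
Visual Basic:

```vb
' Hypothetical method on an application service object. It reads a
' text file on the central server and returns the contents as a
' single String - the client-side business object can't tell whether
' the data came from a file or from a SQL database.
Public Function GetDocument(ByVal FileName As String) As String
  Dim intFile As Integer

  intFile = FreeFile
  Open FileName For Input As #intFile
  GetDocument = Input$(LOF(intFile), intFile)
  Close #intFile
End Function
```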

Of course, this model puts a lot of extra processing on the database server. Before
jumping on this as a perfect solution, we would need to evaluate whether that machine
could handle both the data processing and the application service processing without
becoming a bottleneck.

3-Tier Physical Architecture

In a 3-tier design, the client workstations communicate with an application server, and
the application server communicates with the data server that holds the database.

This is a very powerful and flexible way to set things up. If we need to add another
database, we can do so without changing the clients. If we need more performance, we just
add another application server – again with little or no change to the clients.

In the following diagram, the physical
machines are on the left, and the various tiers of the application on the right. You can
see which parts of the application would be running on the different machines:

Client: Presentation and UI-Centric Business Objects

With this approach, both the presentation layer and some objects are placed on the
client workstations. This may appear a bit unusual, since it’s commonly thought that the
business objects go on the central application server machine.

In an ideal world, keeping all the business processing on the application server would
make sense. This would put all the business rules in a central place where they would be
easy to maintain. As we discussed in Chapter 2, however, this typically leads to a
batch-oriented user-interface rather than the rich interface most users desire.

By moving all our processing off the client workstations, we’re also failing to make
good use of the powerful computers on the users’ desktops. Most companies have invested
immense amounts of money to provide their users with powerful desktop computers. It seems
counterproductive to ignore the processing potential of these machines by moving all the
applications processing to a central machine.

Another view might be that the objects should be on both the client and the server,
moving back and forth as required. This might seem better still, since the processing
could move to the machine where it was best suited at any given time. This is the basic
premise of the CSLA, where we’ve split each business object between its UI-centric and
data-centric behaviors – effectively putting half the object on the client and half on the
application server.

Unfortunately, Visual Basic has no innate ability to move objects from machine to
machine. From Visual Basic’s perspective, once an object has been created on a machine,
that’s where it stays. This means we need to come up with an effective way of moving our
objects back and forth between the client and the application server. We’ll look at some
powerful techniques for handling this later in the chapter.

Performance issues play a large role in deciding where we should place each part of our
application. Let’s look at our 3-tier physical model and see how the tiers of our
application will most likely communicate:

When we’re using ActiveX servers like the ones we create with Visual Basic, the client
workstations typically communicate with the application server through Microsoft’s
Distributed Component Object Model (DCOM). Our data-centric objects will most likely use
OLE DB or ODBC to communicate with the database server, although this is certainly not
always the case.

When we’re working with DCOM, we have to consider some important performance issues. As
we go through this chapter, we’ll look at various design and coding techniques that we can
use to achieve excellent performance. In general, however, due to the way DCOM works,
we don’t want a lot of communication going on across the network. The problem is
not that DCOM is slow, or less powerful than other network communication alternatives,
such as the Distributed Computing Environment (DCE) or the Common Object Request Broker
Architecture (CORBA); per-call overhead is simply common to all of these related
technologies.

It is always important to minimize network traffic and calls to objects or procedures
on other machines.

Regardless of performance arguments, we should always keep our objects physically close
to whatever it is that the objects interact with the most. The user-interface should be
close to the user, and the data processing should be close to the data. This means that
the user-interface should be on the client, and the data services should be on the data
server. By keeping the objects in the right place we can avoid network communication and
gain a lot of performance.

Our UI-centric business objects primarily interact with the user-interface, so they
belong as close to that interface as possible. After all, the user-interface is constantly
communicating with the business objects to set and retrieve properties and to call
methods. Every now and then, a business object talks to its data-centric counterpart, but
the vast bulk of the interaction is with the user-interface.

Application Server: Data-Centric Business Objects

As we discussed in the last section, the data-centric business objects run on the
application server. Typically, these objects will communicate with the database server
using OLE DB or possibly ODBC:

We’ll probably use ActiveX Data Objects (ADO) to interact with OLE DB. If we are using
a common relational database, such as SQL Server or Oracle, we may use any one of the
database technologies available within Visual Basic 6.0. The most common technologies
include:

  • ActiveX Data Objects (ADO)
  • Remote Data Objects (RDO)
  • ODBCDirect

In general, ADO is the preferred data access technology. All of the other data access
technologies (RDO, the ODBC API, and ODBCDirect) continue to be supported, but Microsoft
is putting its development efforts entirely toward improving and enhancing ADO (and
OLE DB, its underlying technology). The version of ADO (2.0) included with Visual Basic
6.0 provides performance that is comparable to, or even better than, technologies such as
RDO.
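For reference, opening an ADO 2.0 connection to SQL Server looks something like this;
the server, database, and security settings shown are placeholders that will vary by
site:

```vb
' Minimal ADO 2.0 connection sketch - the connection string values
' are placeholders and will differ in a real installation.
Dim cnDatabase As ADODB.Connection

Set cnDatabase = New ADODB.Connection
cnDatabase.Open "Provider=SQLOLEDB;Data Source=MyServer;" & _
                "Initial Catalog=MyDatabase;Integrated Security=SSPI"
```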

If we choose not to use ADO and we’re working with a typical database server, such as
Oracle or SQL Server, then RDO is probably the next choice. It provides very good
performance, and allows us to tap into many of the features of the database server very
easily. RDO is just a thin object layer on top of the ODBC API, so there’s very little
overhead to degrade performance.

ODBCDirect should be avoided if at all possible. As part of the push toward ADO and
OLE DB, Microsoft already considers ODBCDirect obsolete and recommends against using it
in new development.

Something to keep in mind is that the application server can talk to the data server in
whatever way really works best. For instance, our application server may use TCP/IP
sockets, some form of proprietary DLL, or screen-scraping technology, to interact with the
data source:

This illustrates a major benefit of this whole design, and one that a lot of companies
can use. If our data is sitting on some computer that’s hard to reach, or expensive in
terms of licensing or networking, then we can effectively hide the data source behind the
application server and still let our UI-centric business objects work with the data,
regardless of how we access it.

Now that we’ve looked at a fairly traditional 3-tier physical architecture, let’s
examine a couple of different architectures for Internet/intranet development. These
architectures are not the typical Web browser-based designs that most people are familiar
with. Instead, we’re going to blur the browser approach together with the CSLA to
demonstrate a multi-tier architecture with a browser interface.

Architecture #1

The first architecture we’ll look at is the closest to today’s typical Web development.
On the left is the physical layout, which is very typical of a Web environment. On the
right, though, we’re using our now familiar CSLA, with one exception:

One of the primary goals for Internet development is to keep the client as thin as
possible to provide compatibility across all the different Web browsers out there. This
means that we should avoid putting any processing on the client if at all possible.

Ideally, all we’d ever send to a client would be pure HTML, since that would let any
browser act as a client to our program. Of course, HTML provides no ability to do any
processing of program logic on the client side, and so this provides the ultimate in thin
clients.

Since the Web browser client provides no real processing capabilities, we need a layer
of code to run within Microsoft Internet Information Server (IIS) to act as a surrogate or
proxy user-interface for the business objects.

Similar capabilities are available for other web servers, but we’ll stick with IIS
in this book because it is the easiest to work with from Visual Basic.

We could actually implement this layer using a variety of technologies, but we’ve shown
it here using a new type of project in Visual Basic 6.0: an IIS Application. IIS
Applications provide us with very powerful capabilities when it comes to building
applications on the web server. They are a successor to Active Server Pages (ASP) based
applications, providing similar capabilities, but within the context of a full-blown
Visual Basic application.

Our newly added IIS Application interface layer accepts input from the browser and uses
it to act like a traditional business object client. What this really means is that the
Visual Basic code in this layer takes the place of the Visual Basic forms that we’d
normally be using as an interface to the business objects.

Our IIS Application can access COM objects as easily as any other Visual Basic
application. However, IIS Applications have special capabilities that make it very easy
for us to send HTML out to the user’s browser. Thus, IIS Applications make an excellent
surrogate for a forms-based user interface since we can tap into the power of our
UI-centric business objects and use that information to generate the appropriate interface
for the user.
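To make this concrete, a minimal IIS Application sketch might look like the following.
The Customer object, its Load method, and the property names are hypothetical stand-ins
for real UI-centric business objects:

```vb
' Hypothetical WebClass event in an IIS Application. Instead of
' displaying a form, we read a UI-centric business object and write
' HTML back to the browser through the ASP Response object.
Private Sub WebClass_Start()
  Dim objCustomer As Customer       ' hypothetical business object

  Set objCustomer = New Customer
  objCustomer.Load 42               ' hypothetical Load method

  With Response
    .Write "<HTML><BODY>"
    .Write "<P>Name: " & objCustomer.Name & "</P>"
    .Write "</BODY></HTML>"
  End With
End Sub
```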

With this technology, and a good set of components containing business objects, we can
build an application based on business objects and then use IIS Applications to create the
user-interface. We’ll cover this in more detail in Chapter 14, where we’ll build an IIS
Application interface. This interface will use the same underlying business objects as the
Visual Basic form-based interface that we’ll create in Chapter 7 and the Microsoft Excel
interface that we’ll create in Chapter 9.

Architecture #2

The second design that we’ll look at here is very similar to the first, but it’s more
scalable. In this design, we retain the application server from the 3-tier model that we
discussed earlier to offload some of the processing from the web server:

In this diagram, we’ve moved the data-centric business objects off the Web server and
back on to the application server. This can be particularly useful if we have a mixed
environment where we’re providing a browser interface for some users and a Visual Basic
forms interface to others.

The code running in the IIS Application takes the place of the Visual Basic forms in a
more traditional user-interface. This means that any code that would have been behind our
Visual Basic forms, to format data for display, or to modify any user input, will be coded
within the IIS Application – generating HTML to be sent to the user’s browser. Either
way, this code should be pretty minimal, since any actual business logic should always be
placed in the business objects or application services.

If we want to get really fancy, we can use the new DataFormat object capability of Visual
Basic 6.0 to create an ActiveX DLL that contains objects that know how to format our data
for display. These objects could then be used when we’re developing our forms based
interface as well as our IIS Application interface.

The IIS Application also needs to generate the HTML responses for the user, essentially
creating a dynamic display of our data. Since IIS Applications are written in Visual Basic
there’s a very small learning curve to move from traditional Visual Basic development to
developing Web pages using IIS Applications.

COM/DCOM Performance

Most of the physical architectures that we’ve been looking at use DCOM (Distributed
Component Object Model) for communication between machines on the network. But even with
the speed improvements over Remote Automation, DCOM can still be pretty slow. In
particular, there is substantial overhead on a per call basis.

Each time our program calls an object’s property, or method, there’s a speed hit. We
get pretty much the same speed hit regardless of whether our call sends a single byte to
the object or a thousand bytes. Sure, it takes a little longer to send a thousand bytes
than a single byte, but the COM overhead is the same either way – and that overhead is far
from trivial.

Calling Single Properties

From a high-level view, each time we access a property or call a method, COM needs to
find the object that we want to talk to; and then it needs to find the property or method.
Once it’s done all that work, it moves any parameter data over to the other process, and
calls the property or method. Once the call is done, it has to move the results back over
to our process and return the values.

Take the following code, for example:

Set objObject = CreateObject("MyServer.MyClass")
With objObject
  .Name = "Mary"
  .Hair = "Brown"
  .Salary = 31000
End With

This code has four cross-process or cross-network calls (depending on whether MyServer
is on the same machine or across the network). The CreateObject call is remote and has
overhead. Each of the three property calls is also remote, and each has similar overhead.
For three properties, this might not be too bad; but suppose our object had 50 properties,
or suppose that our program was calling these properties over and over in a loop. We’d
soon find this performance totally unacceptable.

Passing Arguments to a Method

Passing multiple arguments to a method, rather than setting individual properties, is
significantly faster. For example:

Set objObject = CreateObject("MyServer.MyClass")
objObject.SetProps "Mary", "Brown", 31000

But too much overhead still remains, because of the way COM and DCOM process the
arguments on this type of call. Furthermore, this technique doesn’t allow us to design our
business objects in the way we discussed in Chapter 3. With this technique, we’d end up
designing our object interfaces around technical limitations.

Serialization of Data

Many programmers have tried the techniques we’ve just seen, and they’ve eventually
given up, saying that COM is too slow to be useful. This is entirely untrue. Like any
other object communication technology, COM provides perfectly acceptable performance, just
as long as we design our applications using an architecture designed to work with it.

Due to COM’s overhead, when we’re designing applications that communicate across
processes or across the network we need to make every effort to minimize the number of
calls between objects. Preferably, we’ll bring the number of calls down to one or two,
with very few parameters on each call.

Instead of setting a series of parameters, or making a method call with a list of
parameters, we should try to design our communication to call a method with a single
parameter that contains all the data we need to send.

There are five main approaches we can take to move large amounts of data in a single
method call:

  • Directly passing user defined types
  • Variant arrays
  • User defined types with the LSet command
  • ADO(R) Recordset with marshalling properties
  • PropertyBag objects

In any case, what we’re doing is serializing the data in our objects. This means that
we’re collecting the data into a single unit that can be efficiently passed to another
object and then pulled out for use by that object.

Directly Passing User Defined Types

Visual Basic 6.0 provides us with a new capability, that of passing user defined types
(UDTs) as parameters – even between different COM servers. This means we can easily
pass structured data from one object to another object, even if the objects are in
different Visual Basic projects, running in different processes or even running on
different computers.

For instance, suppose we create a class named SourceClass in an ActiveX server (DLL or
EXE):

Option Explicit

Public Type SourceProps
  Name As String
  BirthDate As Date
End Type

Private udtProps As SourceProps

Public Property Let Name(ByVal Value As String)
  udtProps.Name = Value
End Property

Public Property Get Name() As String
  Name = udtProps.Name
End Property

Public Property Let BirthDate(ByVal Value As Date)
  udtProps.BirthDate = Value
End Property

Public Property Get BirthDate() As Date
  BirthDate = udtProps.BirthDate
End Property

Public Function GetData() As SourceProps
  GetData = udtProps
End Function

This class is fairly straightforward, simply allowing a client to set or retrieve a
couple of property values. Note how the UDT, SourceProps, is declared as Public. This is
important, as declaring it as Public makes the UDT available for use in declaring
variables outside the object. The other interesting bit of code is the GetData function:

Public Function GetData() As SourceProps
  GetData = udtProps
End Function

Since the object’s property data is stored in a variable based on a UDT, we can provide
the entire group of property values to another object by allowing it to retrieve the UDT
variable. The GetData function simply returns the entire UDT variable as a result,
providing that functionality.

Now we can create another class named ClientClass:

Option Explicit

Private udtProps As SourceProps

Public Sub PrintData(ByVal Source As SourceClass)
  udtProps = Source.GetData
  Debug.Print udtProps.Name
  Debug.Print udtProps.BirthDate
End Sub

This class simply declares a variable based on the same UDT from our SourceClass. Then
we can retrieve the data in the SourceClass object by using its GetData function. Once
we’ve retrieved the data and stored it in a variable within our new class we can use it as
we desire. In this case we’ve simply printed the values to the Immediate window, but we
could do whatever is appropriate for our application.

This mechanism allows us to pass an object’s data to any other code as a single entity.
By serializing our object’s data this way, we can efficiently pass the data between
processes or even across the network.
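Putting the two classes together, client code (in the same project, or in a project with
a reference to the ActiveX server) might look like this:

```vb
' Exercising the two classes: the object's entire set of property
' data travels from SourceClass to ClientClass in one GetData call.
Dim objSource As SourceClass
Dim objClient As ClientClass

Set objSource = New SourceClass
objSource.Name = "Mary"
objSource.BirthDate = #3/15/1970#

Set objClient = New ClientClass
objClient.PrintData objSource
```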

Variant Arrays

The Variant data-type is the ultimate in flexibility. A Variant variable can contain
virtually any value of any data-type – including numeric, string, or even an object
reference. As you can imagine, an array of Variants extends that flexibility so that a
single array can contain a collection of values, each of a different data-type.

For instance, consider this code:

Dim vntArray(3) As Variant
vntArray(0) = 22
vntArray(1) = "Fred Jones"
vntArray(2) = 563.22
vntArray(3) = "10/5/98"

Inside the single array variable vntArray we’ve stored four different values, each of a
different type. We can then pass this entire array as a parameter to a procedure:

PrintValues vntArray

In a single call, we’ve passed the entire set of disparate values to a procedure. Since
methods of objects are simply procedures, we could also pass the array to a method:

objMyObject.PrintValues vntArray

The PrintValues procedure or method might look something like this:

Public Sub PrintValues(Values() As Variant)
  Dim intIndex As Integer

  For intIndex = LBound(Values) To UBound(Values)
    Debug.Print Values(intIndex)
  Next intIndex
End Sub

This simple code just prints the values from the array to the Immediate window in the
Visual Basic development environment. It does, however, illustrate how easy it is to get
at the array data within an object’s method.

Of course, the Variant data-type is highly inefficient compared to the basic
data-types, such as String or Long. Using a Variant variable can be many times slower than
a comparable variable with a basic type. For this reason, we need to be careful about how
and when we use Variant arrays to pass data.

The Variant data-type is generic, meaning that a Variant variable can hold virtually
any piece of data we provide. The downside to this is that, each time we go to use the
variable, Visual Basic needs to check and find out what kind of data it contains. If it
isn’t the right type of data then Visual Basic will try to convert it to a type we can
use. All this adds up to a lot of overhead, and thus our performance can suffer.

If our object’s data is stored in a Variant array, we’ll incur this overhead every time
we use any of our object’s data from that array. The code in most objects works with data
quite a lot, so we really do run the risk of creating objects with poor performance if we
use Variant arrays.

Using GetRows

Many of our objects will need data from a database. If we’re going to use a Variant
array to send this data across the network, we’ll need some way to get the data from the
database into the array. Usually, we’ll get the data from the database in the form of an
ADO Recordset.

Recordset objects provide us with an easy way to copy the database data into a Variant
array. This is done using the GetRows method that’s provided by the object. The GetRows
method simply copies the data from the object into a two-dimensional Variant array.

The following code, for instance, copies the entire result of a query into a Variant
array named vntArray:

Dim vntArray As Variant
rsRecordset.MoveFirst
vntArray = rsRecordset.GetRows(rsRecordset.RecordCount)

vntArray now contains the contents of the recordset as a two-dimensional array. The
first dimension indicates the column or field from the recordset; the second dimension
indicates the row or record:

vntMyValue = vntArray(intColumn, intRecord)
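For instance, we could walk through every record in the array like this (assuming the
array came from GetRows as shown above):

```vb
' Loop through the records in the two-dimensional Variant array;
' the second dimension is the record, the first is the field.
Dim intRecord As Integer

For intRecord = 0 To UBound(vntArray, 2)
  Debug.Print vntArray(0, intRecord)   ' first field of each record
Next intRecord
```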

Of course, the subscripts for the column are numeric, so they aren’t as descriptive as
the field name would be if we had access to the actual recordset. Instead of the
following:

vntValue = rsRecordset("MyValue")

We’re reduced to using something like this:

vntValue = vntArray(2, intRecord)

The order of columns is entirely dependent upon the field order returned in the
recordset. This means that if we add a field to our SQL SELECT statement, in the middle of
other fields that we’re retrieving, then we’ll have to change all of our programs that
rely on Variant arrays to pass data.
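One way to soften this fragility (a coding convention rather than a language feature) is
to define constants for the column positions in a single shared module, so a change to
the SELECT statement means updating just one place. The constant names here are
hypothetical:

```vb
' Hypothetical column-position constants kept in one shared module.
Public Const COL_ID As Integer = 0
Public Const COL_NAME As Integer = 1
Public Const COL_MYVALUE As Integer = 2

' Client code then reads by name rather than by a magic number:
' vntValue = vntArray(COL_MYVALUE, intRecord)
```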

User Defined Types with the LSet Command

Many people store their object’s data in a variable based on a UDT, because it’s a very
concise and convenient way to handle the values. While Visual Basic allows us to pass UDT
variables as parameters, there are some serious drawbacks to this approach.

In particular, we need to make the UDT Public in order to pass it as a parameter, and
this feature can’t be used if our objects are not part of an ActiveX server – for
instance, if they’re in a Standard EXE project. Additionally, this approach makes it very
easy for someone to write a client program that retrieves our object’s data, manipulates
it without our business logic, and places it back in the object. Finally, Visual Basic
passes UDT variables in a way that isn’t supported by other tools or languages, which
prevents us from passing the data to routines written in other languages, or through
other tools such as Microsoft Message Queue.

The ideal would be if we could use a UDT to store our object’s data, but also be
able to retrieve that data as a single stream of data – say in a String variable.

Luckily, there is a very nice solution that enables us to efficiently convert a UDT
variable to a String variable so we can pass its data to another ActiveX component. We’ll
take a good look at this solution, but first let’s put it in perspective.

Background

Many languages have some equivalent to Visual Basic’s UDT. For instance, C has the
struct, and both FORTRAN and Pascal have similar constructs. Some languages natively
support multiply-defined structures, where the programmer can define two different
variable definitions for the same chunk of memory. FORTRAN implements COMMON memory
blocks, while C uses a union keyword within a struct.

Multiply-defined structures are very powerful. They allow us to set up a user-defined
type like this:

Private Type DetailType
  Name As String * 30
  Age As Integer
End Type

Then we can set up another user-defined type that is exactly as long as DetailType:

Private Type SummaryType
  Buffer As String * 31
End Type

Remember that in Win32 environments such as Windows 98 and Windows NT, strings are all
Unicode; therefore, each character in a string will actually consume 2 bytes of memory.
This is so Windows can support more complex character sets, such as those required by
Arabic or languages in the Far East.

So DetailType is a total of 62 bytes in length: 60 for the Name field and 2 for the Age
field. This means that SummaryType also needs to be 62 bytes in length. With Unicode, a 62
byte String is actually half that many characters. Therefore, dividing 62 by 2, we find
that SummaryType needs to be 31 characters long.

It’s worth noting that if DetailType were 61 bytes in length (say Age was a Byte) then
SummaryType would still need to be 31 characters long, since we must round up. If we
didn’t round up, then SummaryType would be just 30 characters long and could only hold 60
of the 61 characters: we’d lose one byte at the end.

FORTRAN, C, and some other languages would allow a single variable to be referenced as
DetailType or SummaryType. In other words, they would let us get at the same set of bytes
in memory in more than one way. This means that we could set the Name and Age values with
DetailType, and also treat the memory as a simple String without having to copy any values
in memory.

Since Visual Basic doesn’t allow us to pass a UDT variable as a parameter to an object,
we need some way to convert our UDT variables to a data-type that can be passed. The ideal
situation would be one in which we could simply define the same chunk of memory as both a
UDT and a simple String variable, as shown above.

Although Visual Basic doesn’t allow us to do this, it does provide us with a very
efficient technique that we can use to provide an excellent workaround.

Visual Basic Implementation

Visual Basic’s approach does require a memory copy, but it’s performed with the LSet
command, which is very fast and efficient. Let’s take a look at how LSet works.

Open a Standard EXE project and type in the DetailType and SummaryType UDTs that we
just looked at. They need to be entered in the General Declarations section of our form.
Then add the following code to the Form_Click event:

Private Sub Form_Click()
Dim udtDetail As DetailType
Dim udtSummary As SummaryType
With udtDetail
.Name = "Fred Jones"
.Age = 23
End With
End Sub

This code simply defines a variable, using each UDT, and then loads some data into the
udtDetail variable, which is based on the DetailType type. So far, this is pretty simple –
so here comes the trick. We’ll add a line using the LSet command:

Private Sub Form_Click()
Dim udtDetail As DetailType
Dim udtSummary As SummaryType
With udtDetail
.Name = "Fred Jones"
.Age = 23
End With
LSet udtSummary = udtDetail
End Sub

This new line uses the LSet command to do a direct memory copy of the contents of
udtDetail into udtSummary. Visual Basic doesn’t perform any type checking here; in fact,
it doesn’t even look at the content of the variables; it just performs a memory copy. This
is very fast and very efficient: substantially faster than trying to copy individual
elements of data, for instance.

The result of this code is that the Name and Age values are stored in the udtSummary
variable, and can be accessed as a string using udtSummary.Buffer. Of course, the values
stored aren’t all printable text, so if we try to print this value we’ll get garbage.
That’s OK though: the objective was to get the data into a single variable. Now we can
pass that string to another procedure using the following code:

Private Sub Form_Click()
Dim udtDetail As DetailType
Dim udtSummary As SummaryType
With udtDetail
.Name = "Fred Jones"
.Age = 23
End With
LSet udtSummary = udtDetail
PrintValues udtSummary.Buffer
End Sub

The PrintValues subroutine just accepts a simple String as a parameter. Of course,
we’re really passing a more complex set of data, but it’s packaged into a simple String at
this point so it’s easy to deal with. Let’s look at the PrintValues routine:

Private Sub PrintValues(Buffer As String)
Dim udtDetail As DetailType
Dim udtSummary As SummaryType
udtSummary.Buffer = Buffer
LSet udtDetail = udtSummary
With udtDetail
Debug.Print .Name
Debug.Print .Age
End With
End Sub

Again, we declare variables using DetailType and SummaryType for use within the
routine, and we copy the parameter value into udtSummary.Buffer. Both values are simple
Strings, so this isn’t a problem. We do need to get at the details of Name and Age,
though, so we use the LSet command to perform a memory copy and get the data into the
udtDetail variable:

LSet udtDetail = udtSummary

Once that’s done, we can simply use udtDetail as normal: in this case, for just
printing the values to the Immediate window in the Visual Basic IDE.

If you run this program, and click on the form, the appropriate output will appear in
the Immediate window accordingly.
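Putting this technique to work, a business object can expose its state as a single String through a pair of methods. The following sketch (the type and method names are invented for illustration) follows the same pattern we just walked through, with a 31-character buffer to hold the 62 bytes of StateProps:

```vb
Private Type StateProps
  Name As String * 30
  Age As Integer
End Type

' 62 bytes of data requires a 31-character Unicode buffer
Private Type StateData
  Buffer As String * 31
End Type

Private mudtProps As StateProps

' Return the object's state as a single String value
Public Function GetState() As String
  Dim udtData As StateData
  LSet udtData = mudtProps
  GetState = udtData.Buffer
End Function

' Restore the object's state from a String value
Public Sub SetState(Buffer As String)
  Dim udtData As StateData
  udtData.Buffer = Buffer
  LSet mudtProps = udtData
End Sub
```

The String returned by GetState can be passed as a parameter to another object, and the receiving object can rebuild the same state by handing that String to SetState.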

Memory Alignment

In most languages, including Visual Basic, the compiler will align certain user-defined
data-types in memory so that they fall on longword boundaries. This means that the
compiler inserts filler space if an element won’t start on an even 4-byte boundary.

This slightly complicates how we determine the length of our string buffer UDT. While
this memory alignment problem adds some complexity to our use of LSet to copy a UDT, it’s
not hard to overcome. Let’s look at the details of the problem, and then we’ll see how
easy it is to solve.

Consider the following user-defined type:

Private Type TestType
B1 As Byte
L1 As Long
End Type

If we assume that the compiler starts the type on a memory boundary, then B1 will be at
the start of a 4-byte boundary. But B1 is only a single byte long, and it’s required that
L1 start on a longword boundary; so that leaves 3 bytes of space, between B1 and L1, which
need to be taken up. This is where the compiler inserts the filler space.

One way to quickly check the actual length (in bytes) of a UDT is to declare a variable
based on the UDT in question and use the LenB function. This function will return the
length, in bytes, of any variable – including one based on a UDT. For instance, we could
write the following code:

Private Sub TypeLength()
Dim udtTest As TestType
MsgBox LenB(udtTest)
End Sub

If we run this subroutine, we get a message box showing the total number of bytes in
the TestType UDT, which in this case is 8.

Not all data-types are longword-aligned. Some are word-aligned, meaning that
they’ll fall on 2-byte boundaries. To accurately determine where the compiler will be
adding filler space, we need to know which data-types will be word-aligned and which will
be longword-aligned. The following data-types are always aligned to a word (2 byte)
boundary:

Byte
Integer
Boolean
String

The following data-types are longword aligned by the compiler:

Long
Single
Double
Date
Currency
Decimal
Object
Variant

The compiler will add space as needed in front of any of these data-types, so they
always start on a longword boundary.

When the compiler adds these filler spaces, it makes our UDT that much longer. Of
course, we’re creating another UDT to copy our data into, so we need to know exactly how
long to make that UDT so that it can fit all the data. Here’s our problem: the length of
the string needs to be inflated to include the filler spaces – otherwise it will be too
short, and the last few bytes of data will be lost during the copy.

The following UDT will hold 6 bytes of data:

Private Type StringType
Buffer As String * 3
End Type

At first glance, we might expect StringType to be able to contain the TestType elements
shown above, since a Byte and Long combined are only 5 bytes in length. But because L1
(the Long) needs to be longword-aligned, the compiler inserts 3 filler bytes before L1;
therefore, the total length of TestType is actually 8 bytes. This means that we need the
following UDT to hold all the data from TestType:

Private Type StringType
Buffer As String * 4
End Type

While the issue of longword alignment makes it more complicated to determine the length
of the buffer UDT, it is a predictable behavior, and it isn’t really very hard to ensure
we get the length correct.
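Because the filler bytes are predictable, we can also verify the buffer size at run time rather than counting bytes by hand. This sketch (assuming the TestType and StringType definitions shown above) compares the characters needed against the characters available:

```vb
Private Sub CheckBufferSize()
  Dim udtTest As TestType
  Dim udtString As StringType
  ' Characters needed: total bytes, rounded up to whole 2-byte characters
  Debug.Print (LenB(udtTest) + 1) \ 2
  ' Characters available in the string buffer UDT
  Debug.Print LenB(udtString) \ 2
End Sub
```

For the 8-byte TestType, both values come out to 4, confirming that a 4-character buffer is the right size.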

ADO Recordset Objects

ADO 2.0 provides us with some powerful new capabilities that we can use to help us
serialize an object’s data and transfer it from one process to another or across the
network from one computer to another. The core of this capability lies with ADO’s
support for batch mode updating of a Recordset object. With this capability, ADO is able
not only to provide a reference to a Recordset object, but also to copy the
object’s data from one process to another or from one machine to another –
essentially allowing us to pass ADO Recordset objects by value rather than by reference.

Couple the ability to move Recordset objects across the network with ADO 2.0’s
support for Recordset objects that are disconnected from any database connection. This
means we can create a Recordset object out of thin air – no database required. We can
define the columns of data we want to provide, then add or manipulate rows of data at
will.

Between these two capabilities, we can use ADO 2.0 to create an arbitrary Recordset
object to store any data we’d like and then pass that Recordset from process to
process or machine to machine as we choose. ADO takes care of all the details of
serializing the Recordset itself, allowing us to simply interact with a Recordset object
to view, update or add our data.

Before we can pass a Recordset around our network we need to create it. There are two
ways to create a Recordset object for serializing our object’s data – creating
the Recordset from a database, or creating a connectionless Recordset through code.

Creating Recordset Objects from Data

The most common way to create a Recordset object is to select some data from a database
to be loaded into the object. However, if we’re going to pass that Recordset around
the network we do need to take some extra steps as we open it.

In particular, the CursorLocation property of our Connection or
Recordset object must be set to adUseClient. This causes ADO to use a cursor engine
located on the client machine rather than one within the database server itself. Since the
cursor engine is local to the client, we can send the Recordset’s data to any machine
where ADO or ADOR (the light-weight client version of ADO) is installed.

We also need to specify the LockType property of our Recordset as
adLockBatchOptimistic. This causes ADO to build our Recordset object in a batch update
mode, allowing us to manipulate the Recordset and its data even if it is not currently
connected to the data source.

By setting these two properties as we initialize our Recordset object we will cause ADO
to automatically support batch processing of our data, and to automatically pass the
Recordset object’s data to any process that interacts with the object.

There is one caveat to this approach. Our CursorType property can only be one of
adOpenKeyset or adOpenStatic when we are using a batch mode Recordset. If we are passing
the Recordset to a machine that only has ADOR installed (such as a thin client
workstation), then we can only use the adOpenStatic setting for the CursorType property.

Let’s take a look at some code that opens a Recordset and returns the object upon
request:

Public Function GetData() As Recordset
Dim rs As Recordset
Dim strConnect As String
Set rs = New Recordset
strConnect = "Provider=Microsoft.Jet.OLEDB.3.51;" & _
"Persist Security Info=False;Data Source=D:\vb6video.mdb"
With rs
.CursorLocation = adUseClientBatch
.Open "select * from customer", strConnect, adOpenStatic, _
adLockBatchOptimistic
End With
Set GetData = rs
Set rs = Nothing
End Function

This code doesn’t look much different than what we’d expect to see any time
we open a Recordset based on some data in a database. However, there are a couple
interesting things to note. First off, before calling the Open method we set the
CursorLocation property of the object:

.CursorLocation = adUseClientBatch

Then, in the call to the Open method we set the LockType to adLockBatchOptimistic. We
also set the CursorType to adOpenStatic. We could have set it to adOpenKeyset if we’d
chosen, but by choosing adOpenStatic we know we can pass the Recordset to a client that
might only have ADOR installed without having ADO convert our cursor type during that
process.

Creating a Connectionless Recordset

While we can create Recordset objects from a database, that often won’t work well
for serializing data from our objects. It is not at all unusual for an object’s state
to include information that isn’t necessarily stored in a database. For instance, we
may wish to pass a flag indicating whether our object is new, along with other types of
information, between our client workstation and the application server. If our Recordset object is
generated directly from a database query we are restricted to only passing information
that comes from that database.

Fortunately ADO 2.0 provides a very elegant solution to this problem by allowing us to
create a Recordset object that is totally unrelated to any data source. The steps involved
in this process are quite straightforward:

  • Create the Recordset object
  • Add columns to the Recordset using the Fields object’s Append method
  • Open the Recordset
  • Add or manipulate data in the Recordset

As an example, the following code creates a connectionless Recordset with two columns
of data: Name and BirthDate. We then add a couple rows of information to the Recordset
object and return it as a result of the method:

Public Function MakeData() As Recordset
Dim rs As Recordset
Set rs = New Recordset

With rs
.Fields.Append "Name", adBSTR
.Fields.Append "BirthDate", adDate
.Open
.AddNew
.Fields("Name") = "Fred"
.Fields("BirthDate") = "1/1/88"
.Update
.AddNew
.Fields("Name") = "Mary"
.Fields("BirthDate") = "3/10/68"
.Update
End With

Set MakeData = rs
Set rs = Nothing

End Function

The first couple lines simply declare and create an instance of a Recordset object.
Once that’s done we can move on to adding columns to the empty Recordset by calling
the Append method of the Fields object:

.Fields.Append "Name", adBSTR
.Fields.Append "BirthDate", adDate

After we have columns defined, all that remains is to load our object with some data.
This is as simple as calling the AddNew method, loading some data, and calling Update to
store the data into the Recordset. Of course, the data isn’t stored in any database, since
this Recordset exists only in our computer’s memory.
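The Recordset returned by MakeData can then be read like any other Recordset. Here is a sketch of calling code (assuming a reference to the ADO or ADOR library; note that after the AddNew calls, the cursor is left on the last row added, so we move back to the first row before reading):

```vb
Dim rs As Recordset
Set rs = MakeData()
rs.MoveFirst ' the cursor was left on the last row added
Do While Not rs.EOF
  Debug.Print rs.Fields("Name") & " " & rs.Fields("BirthDate")
  rs.MoveNext
Loop
```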

Passing a Recordset by Value

We’ve now looked at two different ways to create a Recordset object that we can pass to another process or across the network. Both of the routines shown above are written as Function methods, returning the Recordset object as a result.

ADO itself handles all the details of moving the Recordset object’s data to the client process or computer, so we really don’t need to do any extra work at all beyond setting the properties as we did to create the Recordset. Our client code can be quite simple as shown by the following fragment:

Dim objServer As Object
Dim rs As Recordset
Set objServer = CreateObject("MyDataServer.DataMaker")
Set rs = objServer.GetData
Set objServer = Nothing

Once this code fragment is complete, we have a Recordset object to work with. This code
assumes that the code to create the Recordset is in an ActiveX server named MyDataServer
and in a class named DataMaker. This ActiveX server could be running in another process,
or on another machine on the network.

Regardless, once we’ve got the Recordset through this code, the MyDataServer
ActiveX server can be totally shut down – the machine it is running on could even be
shut off – and our code can still continue to work with the Recordset and its data.

The program running this code fragment does require a reference to either the ADO or
ADOR library in order to function. In many cases the lighter-weight ADOR library is
sufficient, as it provides basic support for interacting with Recordset objects that
are created and updated by another process or machine.

PropertyBag Objects

A property bag is an object that supports the concept of a key-value pair. The idea is
that the property bag can store a value associated with a key, or name, for that value.
For instance, we might store the value 5 along with the key Height. At any point we can
also retrieve the Height value from the property bag.

Visual Basic 5.0 introduced the concept of a PropertyBag object as part of the ability
to create ActiveX controls. While the concept was useful when storing properties of our
control that the developer set at design time, we couldn’t take advantage of the
PropertyBag object outside of control creation.

Visual Basic 6.0 extends the PropertyBag object such that we can use it anywhere we
choose within our applications. Basically, anywhere that we need to manage key-value
pairs, we can make use of the PropertyBag object provided by Microsoft.

Better still, the PropertyBag object implements a Contents property that allows us to
access the entire contents of the object as a single Byte array – essentially it
provides built-in support for streaming its own data. We can retrieve the data, send the
Byte array as a parameter to another process or across the network, and then place it into
a PropertyBag object, giving us an exact duplicate of the object we started with.

Of course Byte arrays aren’t nearly as easy to manipulate or work with as a String
variable would be. Fortunately this isn’t a serious problem either, as Visual Basic
makes it very easy for us to convert a Byte array to a String and then back to a Byte
array.
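In Visual Basic, the conversion between a Byte array and a String is just an assignment in either direction. For example:

```vb
Dim arData() As Byte
Dim strData As String
arData = "Hello" ' String to Byte array (Unicode, 2 bytes per character)
strData = arData ' Byte array back to String
Debug.Print strData ' displays "Hello"
```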

Serializing an Object’s Data

Let’s take a look at some simple code that illustrates how we can use a
PropertyBag to serialize the data in an object.

Suppose we’ve got an object with two pieces of data: Name and BirthDate. We can
store this data in a PropertyBag object with code similar to this:

Public Function GetObjectData() As String
Dim pbData As PropertyBag
Set pbData = New PropertyBag

With pbData
.WriteProperty "Name", mstrName
.WriteProperty "BirthDate", mdtBirthDate
End With

End Function

Once we’ve created the PropertyBag object, we can simply use its WriteProperty
method to store the values from our object into the property bag. In this case, we’re
assuming that the name and birth date data are stored in the variables mstrName and
mdtBirthDate.

After our property bag has our object’s data, we can retrieve all the data in a
single Byte array using the PropertyBag object’s Contents property.

Public Function GetObjectData() As String

Dim pbData As PropertyBag

Set pbData = New PropertyBag

With pbData
.WriteProperty "Name", mstrName
.WriteProperty "BirthDate", mdtBirthDate
End With

GetObjectData = pbData.Contents

End Function

With this simple code we’ve converted our object’s data into a single String
variable that we can easily pass as a parameter to another object, even across the
network.

Deserializing an Object’s Data

Now that we’ve seen how we can take data from an object and use a PropertyBag to
serialize that data into a simple String variable, let’s take a look at how we can
use that String variable to load another object with the data.

Since we know we’ll be receiving a String value, the first step is to convert that
String into a Byte array so we can place it into the PropertyBag object’s Contents
property. While we’re doing this, we’ll also need to create a PropertyBag object
to work with:

Public Sub LoadObject(StringBuffer As String)
Dim arData() As Byte
Dim pbData As PropertyBag
Set pbData = New PropertyBag
arData = StringBuffer
pbData.Contents = arData
End Sub

Once we’ve converted the String to a Byte array, we simply set the Contents
property of our PropertyBag using that value. This causes the PropertyBag object to
contain the exact data that was contained in the other PropertyBag object that we used to
create the String variable.

Now that the PropertyBag object has been populated we can use the ReadProperty method
to retrieve the individual data values for use by our object:

Public Sub LoadObject(StringBuffer As String)
Dim arData() As Byte
Dim pbData As PropertyBag
Set pbData = New PropertyBag
arData = StringBuffer
pbData.Contents = arData
With pbData
mstrName = .ReadProperty("Name")
mdtBirthDate = .ReadProperty("BirthDate")
End With
End Sub

In many ways the use of a PropertyBag object for serializing our object’s data is
comparable to how we used the LSet command to convert a UDT to a String. Either approach
results in our object’s data being converted into a single String value that we can
pass as a parameter, store in a database or send as the body of an email or Microsoft
Message Queue (MSMQ) message. Once the data reaches the other end of its journey we can
easily reconstitute our object by converting the String value back into its original form.
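Putting the two routines together, the complete round trip might look like this (a sketch assuming GetObjectData and LoadObject are implemented in a class; the class name PersonData is invented for illustration):

```vb
Dim objSource As PersonData
Dim objCopy As PersonData
Dim strBuffer As String

Set objSource = New PersonData
' ... load objSource with data ...
strBuffer = objSource.GetObjectData()
' strBuffer can now be passed as a parameter, stored, or sent via MSMQ
Set objCopy = New PersonData
objCopy.LoadObject strBuffer
```

At the end of this fragment, objCopy contains the same Name and BirthDate values as objSource, even though only a simple String crossed between them.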

So far, we've looked at how the logical tiers of our application can be deployed across
various physical machines. We've also looked at the performance ramifications of using COM
and Distributed COM to communicate between our objects.

In this section, we'll go through a number of key concepts that are critical to our
business objects and the creation of Visual Basic user-interfaces to our objects. Here are
the major themes we'll be considering:

  • The UI as a business object client
  • Field-level validation
  • Object-level validation
  • Canceled edits
  • Protecting properties and methods

In the sections that follow, we'll be developing a small project to help us discuss
these main themes. And in Chapters 5 through 7, where we'll expand upon this discussion,
we'll implement a full set of major business objects and a corresponding user-interface
that will become the core of our video rental store project.

The UI as a Business Object Client

One of the primary tenets of the CSLA
is that the user-interface will rely on our business objects to provide all the business
rules and business processing. This implies that there will be a fair amount of
communication between our UI-centric business objects and the user-interface itself:

In the CSLA, the UI-centric business objects typically reside on the same machine as
the user-interface. In the first section of this chapter, we looked at 2-tier, 3-tier, and
Internet physical architectures - so we've seen where the Presentation tier and UI-centric
objects may reside.

In a conventional client/server setting (2-tier or 3-tier), we can put these
Presentation tier objects and UI-centric objects directly on the client workstations. If
we were developing a browser-based user-interface, we'd put these objects on a Web server
using an IIS Application.

The following figure shows how all the logical tiers of our application can be deployed
across the physical machines in both a 3-tier and Internet scenario:

By keeping the UI-centric business objects as close to the user-interface as possible,
we can make the development process as easy as possible for the UI developer. If the
UI-centric objects are right there on the client machine then they're easy and safe to
work with. This means quick development and a higher quality application.

Most larger projects will be developed in a team setting. Typically, the team will be
divided into developers who manage the data, developers who construct the business
objects, and developers who develop the user interface (UI) by using the business objects.
The people in each of these groups need different skills and a different focus.

The UI developers are those who use the business objects. Ideally, they don't need to
understand OO analysis or design; nor do they need to have full knowledge of the
complexities of DCOM, network traffic, or implementing inheritance with Visual Basic.
Instead, these people should have skills geared toward creating an intuitive
user-interface. Their focus is on providing the user with the best possible experience
when using the application.

Designing Our Objects to Support the UI

It's our job, as business object designers and builders, to make the UI developer's job
as easy as possible. We need to design our business objects so that they are powerful and
provide all the features and functionality that the UI developer will need. At the same
time, our objects have to be robust and self-protecting.

If an object can be misused, it will be misused. If an object can be created outside of
its proper place in an object hierarchy, it will be. In short, if our business objects
don't protect themselves then the resulting application will be buggy.

What this really boils down to is that the UI is just a client of the business objects.
The business objects are the core of the application, and the UI is secondary. This makes
it very easy to change the UI without risking the business logic stored in the business
objects.

It also means that we can change our business logic in one place and, by simply
upgrading the ActiveX components that contain our business objects, we can change the
application's behavior. Often this can be done with little or no impact on the
user-interface itself.

We need to make sure that our objects provide sufficient functionality so that we don't
limit the capabilities of the UI. At the same time, we need to make sure that our
functionality is not tied to any single UI or interface concept. If we compromise our
objects for a specific UI then we set a precedent that might mean we'll change our
objects any time the UI is changed - and that is exactly what we want to avoid.

Ideally, we should design our UI-centric business objects so they provide a robust and
consistent set of services for use by the user-interface. It isn't enough that the objects
simply represent real-world entities; they also need to make it easy for the UI developer
to create rich and interactive user-interfaces.

Business Behaviors vs. UI Behaviors

The business-related behaviors of each object will vary - depending on the real-world
entity that each object represents. At the same time, we should be able to provide a
consistent set of behaviors, across all our business objects, to support the
user-interface. We don't want the UI developer to have to learn a whole new set of rules
to work with each and every business object. Instead, we want all our business objects to
basically behave in the same manner.

As we've already seen, our objects provide an interface that is composed of properties,
methods, and events that represent real-world objects. This is sufficient to create a
simple UI that just lets the user enter information in the hope that it's correct.
However, there are a couple other types of interaction that a UI will need if we want to
provide more feedback to the user. These include raising errors and having some way to
indicate when an object is in a valid state.

We also need to take steps to ensure that the UI developer can't easily break our
objects by calling inappropriate methods or setting inappropriate properties. And we need
to prevent the UI from creating or accessing objects that need to be protected.

A Basic Object and User-Interface

We've just covered a number of concepts that are very important if we're going to
provide a consistent set of behaviors to the users of our objects. To illustrate these
concepts, we'll use a simple example. This will let us try out each concept in turn.

The Person Class

For our example, let's create a single class that represents a person. It's going to be
easy to work with this class, because we can all understand the properties and behaviors
that might make up a Person object. This class also makes a good example for us because
there isn't very much difference between a person, a customer, and an employee - and the
latter two are both typical business objects that we might need to create.

Defining the Person Object's Interface

We won't get too carried away with the attributes of a person. Certainly we could list
hundreds, if not thousands, of attributes, but we'll limit the list to four:

  • SSN (social security number)
  • Name
  • Birthdate
  • Age

In Chapters 1 and 3, we discussed events and how they could be used to allow our
objects to indicate when certain things happened. In our Person object, we'll implement an
event to indicate when the person's age has been changed:

  • NewAge

As we'll see, this event will need to be fired any time the BirthDate property is set,
since that is when the person will get a new Age property value.

Setting up the Project

If we're going to do some coding, we'll need to set up a project in Visual Basic where
we can work. Following the CSLA, we'll actually want to create two different projects: one
for our business object (Person), and one for the user-interface.

Putting the UI in a separate project from the business object will help enforce the
separation between the presentation and business tiers. The UI itself will be in a program
that the user can run, just like any other program we'd normally create. For our business
objects, we'll use an ActiveX DLL, since we don't want the overhead of running a whole
other EXE for our Person object.

Right now, we're going to concentrate on the Person object. We'll create a project for
the UI once we're done with the Person object. So create a new ActiveX DLL project, and
change the name of the project to PersonObjects using its Properties window.

Coding the Person Object

Once we've opened our new PersonObjects project, we should see the code window for
Class1. Use the Properties window for the class to change the name of the class to Person.

Here is the code for the Person class:

Option Explicit
Event NewAge()
Private Type PersonProps
SSN As String * 11
Name As String * 50
Birthdate As Date
End Type
Private intAge As Integer
Private udtPerson As PersonProps

Public Property Let SSN(Value As String)
udtPerson.SSN = Value
End Property

Public Property Get SSN() As String
SSN = Trim$(udtPerson.SSN)
End Property

Public Property Let Name(Value As String)
udtPerson.Name = Value
End Property

Public Property Get Name() As String
Name = Trim$(udtPerson.Name)
End Property

Public Property Let Birthdate(Value As Date)
Static intOldAge As Integer
udtPerson.Birthdate = Value

CalculateAge

If intAge <> intOldAge Then
intOldAge = intAge
RaiseEvent NewAge
End If
End Property

Public Property Get Birthdate() As Date
Birthdate = udtPerson.Birthdate
End Property

Public Property Get Age() As Integer
Age = intAge
End Property

Private Sub CalculateAge()
If DatePart("y", udtPerson.Birthdate) > DatePart("y", Now) Then
intAge = DateDiff("yyyy", udtPerson.Birthdate, Now) - 1
Else
intAge = DateDiff("yyyy", udtPerson.Birthdate, Now)
End If
End Sub

This class contains a very straightforward set of code. We just accept the values for
properties and return them when requested. The exception being that we call the
CalculateAge method to recalculate the intAge value each time the Birthdate property is
set, so that it can be returned in the read-only Age property.

Also notice that we have code to raise the NewAge event in the Property Let routine for
the BirthDate property. When we get a new birth date for the person, we always need to
check to see if we've changed the person's age. If we have changed their age, we can raise
this event to indicate to our client program that the age is now different.

That's all there is to this class, for the moment. We'll expand on its functionality
shortly, but right now we need to make sure that we can create a user-interface to work with
this simple version.

Adding the Project

Let's put together a very simple form to act as an interface for a Person object. We'll
build this in a new Standard EXE project called PersonDemo. We can add this project to the
same Visual Basic session that we used to build the PersonObjects project: simply choose
the File-Add Project... option, rather than File-New Project, and choose Standard EXE.
Don't forget to change the name of this second project to PersonDemo using its Properties
window.

When we run our project with other projects loaded inside the IDE, Visual Basic will
break us into the debugger in any of the projects that are loaded, making this a very
attractive feature for debugging. It hasn't always been this easy - in Visual Basic 4.0,
we would have had to run a copy of Visual Basic for each project we wanted to debug
interactively.

Creating the Form

Select Form1 and change its name to EditPerson. We'll lay out this form as follows:

The form has three text boxes so that the user can supply values for the social
security number (txtSSN), name (txtName), and birth date (txtBirthdate). It also has a
label set up to display the person's age (lblAge), as well as standard OK, Cancel and
Apply buttons (cmdOK, cmdCancel and cmdApply). Set the Enabled properties of cmdOK and
cmdApply to False, and set the BorderStyle property of lblAge to 1 - Fixed Single.

Referencing the PersonObjects Project

Before we start putting code behind the form, we need to make sure our new project has
access to our Person class. Even though both projects are running within the same Visual
Basic window, we still need to add a reference from our UI project back to the ActiveX DLL
we created.

To do this, we need to choose the Project-References menu option to bring up the
References dialog window. Then find the entry for PersonObjects and check the box to its
left.

When we click OK, we'll establish a reference from our UI project back to our
in-process server, PersonObjects. This will give our program access to all the public
classes in that project. In this case, we'll get access to the Person class.

Adding the Code

Now let's put some code behind the Edit Person form:

Option Explicit
Private WithEvents objPerson As Person
Private Sub cmdApply_Click()
' save the object
End Sub

Private Sub cmdCancel_Click()
' do not save the object
Unload Me
End Sub

Private Sub cmdOK_Click()
' save the object
Unload Me
End Sub

Private Sub Form_Load()
Set objPerson = New Person
End Sub

Private Sub objPerson_NewAge()
lblAge = objPerson.Age
End Sub

Private Sub txtBirthdate_Change()
If IsDate(txtBirthdate) Then objPerson.Birthdate = txtBirthdate
End Sub

Private Sub txtName_Change()
objPerson.Name = txtName
End Sub

Private Sub txtSSN_Change()
objPerson.SSN = txtSSN
End Sub

There's nothing terribly complex going on here.

At the top, we declare a variable to hold our Person object and then, in the Form_Load
event, we create a Person object and store a reference to it in the objPerson variable.
The only point worth making, here, is that we're using the WithEvents keyword as we
declare the objPerson variable. This is what allows our program to receive the NewAge
event that the Person object might raise. We don't have to use the WithEvents keyword, but
without it we won't be able to receive the event.

If we do receive a NewAge event, it will be handled in the objPerson_NewAge routine. In
this case, we want to update the age value that's displayed in lblAge with the
objPerson.Age property.

Each text box has a Change event subroutine that simply takes the value from the
control and puts it into the corresponding property of our Person object. We are trusting
the business object, Person, to handle any validation or business rules, so there's no
reason to worry about that here in the UI.

While we could use the new Validate event that was introduced in Visual Basic 6.0, the
Change event allows us to apply our business logic as the user enters each keystroke,
providing the richest possible feedback to the user. Additionally, the Validate event
suffers from the same limitation as the LostFocus event, in that it won’t be fired if
the user presses Enter to take the default button action or Escape to take the cancel
button action. The Change event is the only way to ensure beyond all doubt that our
business logic is applied to each field as the data is entered by the user.
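For contrast, a hypothetical Validate-based handler (not used in our form) might look like the sketch below. It only fires when focus moves to a control whose CausesValidation property is True, so the user would get no feedback until leaving the field:

```vb
' Hypothetical alternative using the VB6 Validate event
Private Sub txtSSN_Validate(Cancel As Boolean)
On Error GoTo HandleError
objPerson.SSN = txtSSN
Exit Sub
HandleError:
Beep
Cancel = True ' keep the focus in the field until the value is acceptable
End Sub
```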

The OK, Cancel and Apply button code is intentionally vague. How we support these
buttons from within our business object is tricky. The solution we're going to see is both
elegant and powerful, but we'll need to beef up our Person object a bit before we're ready
to cover it.

Running the Program

Since the UI in the PersonDemo project is what drives our business objects, we need to
make sure the program starts executing in that project. To set this up, select the
PersonDemo project in the Project Group window, right-click the mouse to get the context
menu, and select Set as Start Up. Now, when we run the program, Visual Basic will begin
execution within the PersonDemo project.

At this point, we should be able to choose the Run-Start menu option or press F5 to run
the program and interact with our Person object.

This seems like a pretty nice, straightforward solution. However, we'll soon find that
we have a couple of problems with our implementation, so let's take a look at them.

Enforcing Field-Level Validation

Looking at the Person class, we see that it should only store 11 characters for the SSN
and 50 characters for the Name - at least according to our user-defined type, PersonProps:

Private Type PersonProps
SSN As String * 11
Name As String * 50
Birthdate As Date
Age As Integer
End Type

Here's the situation: if we run the program and enter text into the SSN or Name fields
on the form, we'll find that we can enter as many characters as we like. This is obviously
a problem.

While we could simply set the MaxLength property on the form's fields, we haven't
solved the underlying problem. After all, suppose the next interface is an Excel
spreadsheet, where we don't have a MaxLength property. This is a clear case where the
object needs to protect itself.

Raising an Error from the Person Object

The easiest way for the object to indicate that it has a problem with a value is just
to raise an error. So let's alter the SSN Property Let in our Person class as follows:

Public Property Let SSN(Value As String)
If Len(Value) > 11 Then _
Err.Raise vbObjectError + 1001, "Person", "SSN too long"
udtPerson.SSN = Value
End Property

Now the form will be notified if the user's entry is invalid. We might still want to
set the MaxLength property on the form's field but, if we don't, then at least there will
be some indication that there was a problem - and the object will have protected itself.
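If we did choose to set MaxLength as well, a single line of UI code would do it - purely cosmetic, since the object remains the real gatekeeper:

```vb
' cosmetic defense in the UI; the Person object still enforces the rule
txtSSN.MaxLength = 11
```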

Trapping the Error in the UI

Of course, the form will need some extra code to handle the error that's raised. How
that error is handled is up to the individual UI designer. It could be handled as simply
as sounding a beep and resetting the displayed value. Add this code to the txtSSN_Change
routine in the EditPerson form:

Private Sub txtSSN_Change()
On Error GoTo HandleError
objPerson.SSN = txtSSN
Exit Sub
HandleError:
Beep
txtSSN = objPerson.SSN
txtSSN.SelStart = Len(txtSSN)
End Sub

Since we're handling our own error
situation here, we need to set the Visual Basic environment to break only on those errors
that we aren't handling ourselves. To set this up, choose the Tools-Options dialog, select
the General tab, and make sure that Break on Unhandled Errors is selected.

Also note that we didn't use On Error Resume Next, but instead used a labeled error
handler. With Visual Basic 5.0 and later, this is an important performance consideration,
since the native code compiler adds a lot of extra code to support On Error Resume Next,
so a labeled error handler is much more efficient.

Running the Program

At this point, we should be able to run our program and see how this works. Go to the
SSN field and try to enter a value longer than 11 characters. The code we just added
should prevent us from entering any value longer than 11 characters.

The beauty of this solution is that the code in the form only deals with the
presentation; it doesn't enforce any business rules. We could add extra checks in the
Person class to prevent entry of alpha characters in the field, or change the maximum
length, and there would be no impact on the code in the form.

Enforcing Object-Level Validation

By raising errors when a field is given an invalid value, we've provided field-level
validation through our business object. It's also important to provide object-level
validation to the largest degree possible.

Object-level validation checks the entire object - all the properties and internal
variables - to make sure that everything is acceptable and all business rules have been
met. This is important to the UI developer, since they may want to disable the OK and
Apply buttons when the object is not valid and cannot be saved.

Object-level validation is more difficult to implement than field-level validation,
since it depends upon more factors. An object might be valid only when all fields have
values filled in; or perhaps a field is required only when another field contains a
specific value. Virtually anything is possible, since the rules are dictated by the
business requirements for the object.

Worse still, some object-level validation may not be possible in the business object on
the client. Some business rules might really be relational rules in the database, so it's
possible that we won't know they aren't being met until we try to save the object's data.

Granted, we can't guarantee that an object is valid according to rules enforced by the
application services; but that shouldn't stop us from indicating whether the object meets
all the conditions that can be checked right there on the client workstation. If we do
that, then at least the UI can disable the OK and Apply buttons most of the time it's
actually appropriate to do so, and we'll be moving towards a better interface.

An IsValid Property

To provide this functionality for the UI, let's just add a single Boolean property to
the Person object, called IsValid. This property will return a True value if all the
business rules are met (or at least, as far as our object can tell that they've all been
met).

If our object has an IsValid property, then the form can check to see if the object is
valid at any point. If the IsValid property returns False then the form knows that the
object has at least one broken business rule, and the OK and Apply buttons can be
disabled.

This implies that the IsValid property's value is based on all the business rules in
the object. That can make things pretty complex, since an object might have a lot of
business rules to check. For instance, we may have rules specifying which properties are
required. We may also have rules that specify that a field must be blank if another has a
value - or just about anything else we can think up.

There are a number of ways to implement an IsValid property. For instance, we could
code all the business rules into the IsValid routine itself. Then, every time the property
was checked, we'd just run through a series of checks to make sure all the conditions were
met. This approach can be a serious performance problem, however, if we have a lot of
rules - or if some of our rules are complex and hard to check.

Another possibility is to keep a Private variable to keep track of the number of rules
that are broken at any given time. As property values change, we can check all the
appropriate rules and change this counter as needed. This solution can provide better
performance, but can be very difficult to implement.

In particular, when we check a rule we have no way of knowing if it was already broken
or if it's newly broken due to some new data. If it was already broken then we don't want
to increment our rule-broken count; but if it's a newly broken rule, then the rule-broken
count needs to be upped by one.

One very good way to implement the IsValid property is to keep a collection of the
rules that are broken within the object. If there are no broken rules in our collection,
then we know the object is valid. As we check each rule, we can also use the collection to
track whether the rule was already broken, is broken now, or is unbroken. This is a very
useful concept, and one that we'll use in Chapter 5 - where we implement quite a number of
business objects.

To make all this easier, let's create a BrokenRules class to help manage this
collection for us. We'll then be able to use this class whenever we need to implement an
object-level IsValid property in our business objects. We'll certainly be seeing it in
action when we implement the objects for our video rental store application, in Chapter 5.

The BrokenRules Class

The BrokenRules class has one purpose, and that is to make it easy for our business
objects to keep track of their business rules and how many are broken at any given time.
If there are no broken business rules then our business object can return a True value
from its IsValid property.

Creating the BrokenRules Class

Since this class will be used exclusively by our business objects as they keep track of
their broken business rules, we'll need to add the BrokenRules class module to any
business object projects we may be developing.

For instance, in the Person project that we've been developing in this chapter, we will
need to add the BrokenRules class to our PersonObjects project - since that's where our
business object resides in this example.

So add a new class module to the PersonObjects project, using the Project-Add Class
Module menu option, and change the name of the new class module to BrokenRules using its
Properties window.

Go ahead and enter the following code for the BrokenRules class, and then we'll walk
through how it works.

Option Explicit
Event BrokenRule()
Event NoBrokenRules()
Private colBroken As Collection
Private Sub Class_Initialize()
Set colBroken = New Collection
End Sub

Public Sub RuleBroken(Rule As String, IsBroken As Boolean)
On Error GoTo HandleError
If IsBroken Then
colBroken.Add True, Rule
RaiseEvent BrokenRule
Else
colBroken.Remove Rule
If colBroken.Count = 0 Then RaiseEvent NoBrokenRules
End If
HandleError:
End Sub

Public Property Get Count() As Integer
Count = colBroken.Count
End Property

Once you've entered this code, make
sure you save it, because we'll be using it throughout the development of our video rental
project.

Our BrokenRules class first declares a collection variable and two events:

Event BrokenRule()
Event NoBrokenRules()
Private colBroken As Collection

The colBroken collection will be used to store exactly which rules have been broken;
meanwhile, the events that we've declared will be raised when a rule is broken or when the
broken-rule count goes to zero.

The real work in the class is done in the RuleBroken routine:

Public Sub RuleBroken(Rule As String, IsBroken As Boolean)
On Error GoTo HandleError
If IsBroken Then
colBroken.Add True, Rule
RaiseEvent BrokenRule
Else
colBroken.Remove Rule
If colBroken.Count = 0 Then RaiseEvent NoBrokenRules
End If
HandleError:
End Sub

The calling code, in our Person object, just passes this routine a label for the rule
and a Boolean flag to indicate whether the rule was broken or not. Then, we just check
that Boolean flag: if it's True then the rule was broken, so we make sure it's in the
collection:

If IsBroken Then
colBroken.Add True, Rule
RaiseEvent BrokenRule

If it is already in the collection, we'll get an error and exit the routine via the
HandleError label; but if it isn't already there, then we'll not only add it to the
collection but also raise the BrokenRule event so the calling program knows that at least
one rule has been broken.

Likewise, if IsBroken is False then we'll remove the entry from the collection:

Else
colBroken.Remove Rule
If colBroken.Count = 0 Then RaiseEvent NoBrokenRules

If the entry is not in the collection, an error will occur and we'll exit the routine
via the HandleError label. If the entry was in the collection, then we'll see if the
overall count of broken rules is down to zero. When the count reaches zero, the
NoBrokenRules event will be fired so the calling code can tell that everything is valid.

Of course, events aren't universally supported: we can't get them just anywhere within
Visual Basic itself, and we can't necessarily use them in all other environments. Other
environments can still use this class, but if they don't support events then they'll need
to check the Count property for these event conditions: when Count is zero, there are no
broken rules and everything should be valid.

Using the BrokenRules Object within Our Person Object

Now let's see how we can use this BrokenRules class in our Person object. To keep this
fairly simple, let's just enforce a rule that states that the SSN field is required and
must be exactly 11 characters in length. We've already implemented code to prevent the
user from entering more than 11 characters in this field; but now we're making the rules
even more restrictive.

In order to use the BrokenRules object within our Person object, we need to declare and
create a Private variable to hold the new object. Add the following line of code to the
General Declarations section of our Person class module:

Private WithEvents objValid As BrokenRules

We also need to add code in the Person object's Class_Initialize routine to create an
object for this variable:

Private Sub Class_Initialize()
Set objValid = New BrokenRules
objValid.RuleBroken "SSN", True
End Sub

Notice, here, that we're forcing the "SSN" rule to be considered broken
straight away. This is important, because, as the programmer, we know that the value is
blank to begin with, so we need to make sure the business rule is enforced right from the
start by indicating that it's broken.

An important note about events: an object can't raise events until it has been fully
instantiated, and an object isn't fully instantiated until the Class_Initialize routine is
complete. This means that any RaiseEvent statements called during the Class_Initialize
routine won't actually raise any events - even though that's exactly what we'd like to
have happen here.

In this example, we're indicating that a rule is broken, so the BrokenRule event should
fire; but it won't, because we're still in the Class_Initialize routine.

Handling the BrokenRule and NoBrokenRules Events

Our Person object needs to handle the events that will be created by the BrokenRules
object we just created. To make it easier for the UI developer, we'll also have our Person
object raise an event to let the UI know when the Person object becomes valid or invalid.

This first line that we'll add to handle these events goes in the General Declarations
section of the Person class module, and it declares the Valid event. We'll use this event
to indicate when our Person object switches between being valid and invalid:

Event Valid(IsValid As Boolean)

As we've seen, our BrokenRules object raises two events of its own: BrokenRule and
NoBrokenRules. The BrokenRule event will be fired whenever a new rule is broken, while the
NoBrokenRules will be fired any time the number of broken rules reaches zero.

By adding the following code to our Person object, we'll be able to react to these
events by raising our own Valid event to tell the UI whether the Person object is
currently valid:

Private Sub objValid_BrokenRule()
RaiseEvent Valid(False)
End Sub
Private Sub objValid_NoBrokenRules()
RaiseEvent Valid(True)
End Sub

If the BrokenRule event is fired then we know that at least one of our Person object's
rules is broken. We can then raise our Valid event with a False parameter to indicate to
the UI that the Person object is currently invalid. Likewise, when we receive a
NoBrokenRules event, we can raise our Valid event with a True parameter to indicate to the
UI that there are no broken rules and that our Person object is currently valid.

Implementing the IsValid Property

When we started this discussion, it was with the intent of creating an IsValid property
on our Person object so that the UI could check to see if the object was valid at any
given point. By implementing the Valid event in the previous section, we've actually
provided a better solution; but we can't assume that the UI can actually receive events,
since not all development tools provide support for them. The IsValid property therefore
remains very important.

Fortunately, our BrokenRules object makes implementation of the IsValid property very
trivial. The BrokenRules object provides us with a count of the number of broken rules,
and if that count is zero then we know that there are no rules broken. No broken business
rules translates very nicely into a valid Person object.

To implement our IsValid property, let's enter the following code into the Person class
module:

Public Property Get IsValid() As Boolean
IsValid = (objValid.Count = 0)
End Property

This property simply returns a Boolean value that's based on whether the broken rule
count equals zero or not.
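For a client environment that can't receive events, polling this property after each change is enough. A hypothetical event-less fragment, assuming the same control names as our form, might look like this:

```vb
' Hypothetical client code for a tool without event support:
' poll IsValid after each change instead of handling the Valid event
Private Sub txtSSN_Change()
objPerson.SSN = txtSSN
cmdOK.Enabled = objPerson.IsValid
cmdApply.Enabled = objPerson.IsValid
End Sub
```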

Enforcing the SSN Business Rules

In our Person object's Class_Initialize routine, we added a line to indicate that our
SSN field was invalid. Since we're making it a required field, and all fields in a brand
new object are blank, we know that the rule is broken during the initialization of the
class.

We can therefore finish the job by adding some code to check our business rules within
the Person object's Property Let routine for our SSN property:

Public Property Let SSN(Value As String)
If Len(Value) > 11 Then _
Err.Raise vbObjectError + 1001, "Person", "SSN too long"
udtPerson.SSN = Value
objValid.RuleBroken "SSN", (Len(Trim$(udtPerson.SSN)) <> 11)
End Property

What we've done, here, is simply call the RuleBroken method of our BrokenRules object,
passing it the name of our rule "SSN" and a Boolean to indicate whether the rule
is currently broken. In this case, the rule just checks to make sure the value is exactly
11 characters long.

Anywhere that we need to enforce a rule in our object, we just need to add a single
line of code with the rule's name and a Boolean to indicate if it's broken or unbroken.
All the other details are handled through the BrokenRules object and the events that it
fires.
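For instance, if we later decided that Name was also required, a hypothetical Property Let could enforce it with one extra line (the rule name "Name" is just our own label, and this rule isn't actually part of this chapter's example):

```vb
' Hypothetical example - Name is not a required field in our sample
Public Property Let Name(Value As String)
udtPerson.Name = Value
' broken when the trimmed value is empty
objValid.RuleBroken "Name", (Len(Trim$(udtPerson.Name)) = 0)
End Property
```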

Using the Valid Event and IsValid Property in the UI

At this point, we've implemented the BrokenRules object to keep track of how many
business rules are broken within a business object. We've also enhanced our business
object, Person, to take advantage of the new BrokenRules object. All that remains is to
enhance our user interface to take advantage of the Valid event and IsValid property that
we just added to our Person object.

A rich user-interface should be able to enable or disable the OK and Apply buttons as
appropriate, so that they're only available to the user if the object is actually valid at
the time. Most users dislike clicking on a button only to be told that the requested
operation can't be performed. Ideally, when an operation can't be performed, the
associated buttons should be disabled.

In our case, the OK and Apply buttons are only valid if the business object itself is
valid. If the business object isn't valid then we can't save it, so both the OK and the
Apply button should be disabled.

Adding the EnableOK Subroutine

The easiest way to manage the enabling and disabling of these buttons is to put the
code in a central routine in the form. In our case, we'll need to add the following
routine to our EditPerson form in the PersonDemo project:

Private Sub EnableOK(IsOK As Boolean)
cmdOK.Enabled = IsOK
cmdApply.Enabled = IsOK
End Sub

Using the IsValid Property

When the Edit Person form loads, the first thing it needs to do is make sure that the
buttons are enabled properly. We need to check the Person object's IsValid property in our
Form_Load routine to find out if the business object is valid at this point. Add this line
of code to the EditPerson form's Form_Load routine:

Private Sub Form_Load()
Set objPerson = New Person
EnableOK objPerson.IsValid
End Sub

All we need to do is call our EnableOK subroutine, passing the Person object's IsValid
property value as a parameter. If the object is valid, we'll be passing a True value,
which will indicate that the OK and Apply buttons are to be enabled.

It might look as if we could rely on the Person object's Valid event to fire as the
object was created. We could then act on that event to enable or disable the two buttons
on our form. Unfortunately, objects can't raise events as they are being created, so there
is no way for our Person object to raise its Valid event while it's starting up. This
means we can't rely on the Valid event to tell us whether the object is valid as our form
is first loading.

Responding to the Valid Event

Once the EditPerson form is loaded and the user is interacting with it, we can rely on
the Person object to raise its Valid event to tell the form whether it is currently valid.
In our EditPerson form, we can add code to respond to this event, enabling and disabling
the OK and Apply buttons as appropriate:

Private Sub objPerson_Valid(IsValid As Boolean)
EnableOK IsValid
End Sub

Since the object will raise this event any time it changes from valid to invalid or
back, the UI developer can rely on this event to enable or disable the buttons for the
life of the form. All we need to do is call our EnableOK subroutine, passing the IsValid
parameter value to EnableOK to indicate whether to enable or disable the two buttons.

Removing the SSN Change Event Code

As our form currently stands, we have code in the Change event of the txtSSN control to
trap any error raised by our business object. Now that we’ve enhanced the business
object to utilize the BrokenRules object, we don’t need this code in the form. Change
the Change event code as shown:

Private Sub txtSSN_Change()
objPerson.SSN = txtSSN
End Sub

With this change, we're allowing the user to enter any value into the TextBox
control, and thus into our object. We're relying on the business logic in the object
to raise the Valid event to inform the UI when the user has entered valid data.

We can now run our program and see how well this works. When the EditPerson form first
appears, the OK and Apply buttons are disabled. As a result of the code we've just
entered, these buttons will only be enabled when there are exactly 11 characters in the
SSN field.

Handling Canceled Edits

Most forms have OK and Cancel buttons, and many have an Apply button as well. We've
included these on our sample form to illustrate how to support them within the object,
since there are some extra steps involved.

So far, we've left the code in the buttons' Click events somewhat vague. Now let's
think through the behaviors we require of the business object to support OK, Cancel and
Apply.

We've implemented our form so that our Person object's properties are being set every
time a field changes on the form. This is important, because it means that any business
rules are validated as the user presses each key. A further ramification is that the
business object's internal variables are always changing as the user changes values on the
screen.

If the OK button is clicked, we just need to save the Person object's variables - and
the form goes away. We'll cover different ways of saving the object later in the chapter,
but the point here is that the OK button is pretty easy. After all, the object already has
its internal variables set and ready to be saved. Since the form goes away when OK is
clicked, we don't have to worry about any subsequent editing of the data.

The Cancel button is a different story. When the user clicks this button, the form goes
away, but the Person object might have different data stored in its variables, since the
user may well have been typing into some fields before they clicked the Cancel button.

On the surface, this might not seem like a problem: the form holds a reference to the
Person object and, when it releases that reference, the object, along with its changed
data, will just go away – or will it? Unfortunately, the object won't go away if some
other form or object also holds a reference to it. Perhaps we have a Family object that
holds references to a number of Person objects. If we were editing one of those Person
objects and we clicked Cancel, we'd expect the Person object itself to stick around as
part of the Family object; but we'd also expect any changes to its data to be reset to the
original values.

The Apply button ties in here as well. When the user clicks Apply, the Person object's
variables will be saved (as we'll see later). However, the user can keep editing the
object - because the form doesn't get unloaded by the Apply button.

To make matters just a bit more complicated, there are also combinations: the user
might do some editing, then Apply the edits, do some more work, and then click Cancel.
Given that sequence of events, we'd need to keep all the changes up to when the Apply was
clicked.

Enhancing the Person Object

Let's look at how we can make it easy for a UI developer to support these three
buttons. What we're talking about, here, is the ability to start editing the object, and
then either commit (Apply) or roll back (Cancel) the edits that were made.

Let's add three methods to our Person object: BeginEdit, ApplyEdit and CancelEdit:

Public Sub BeginEdit()
LSet udtSaved = udtPerson
flgEditing = True
End Sub

Public Sub ApplyEdit()
' data would be saved here
flgEditing = False
End Sub

Public Sub CancelEdit()
LSet udtPerson = udtSaved
flgEditing = False
End Sub

We'll walk through the details of these routines over the next few pages.

The ApplyEdit routine contains a comment to indicate where we need to add some code to
save the object to a database. In the section on Making Objects Persistent, later in this
chapter, we'll discuss saving objects to a database and we'll get into more details about
this process.

Right now, let's see how these three new routines provide support for the OK, Cancel
and Apply buttons. The code in these routines makes use of two new module-level variables
that we need to add to the General Declarations section of the Person class module:

Private udtSaved As PersonProps
Private flgEditing As Boolean

Are We Editing the Object?

The flgEditing variable is easy to follow: we just set it to True when we start editing
the object and False when we're done.

We do need to initialize this variable up front, however; so add the following line to
the Class_Initialize routine of the Person class module:

Private Sub Class_Initialize()
Set objValid = New BrokenRules
objValid.RuleBroken "SSN", True
flgEditing = False
End Sub

By keeping track of whether our object is currently being edited or not, we can make
sure that our object's data is only changed when appropriate. We can use this flag to
disable all the Property Let routines in our object, so the only time a value can be
changed is when the object is in edit mode. By edit mode, I mean when the flgEditing flag
is set to True by the BeginEdit method.
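As a sketch of that technique, each Property Let could start with a guard like the one below (error number 383, "'Set' not supported", is just one reasonable choice of error to raise):

```vb
' Sketch: refuse changes unless BeginEdit has been called first
Public Property Let SSN(Value As String)
If Not flgEditing Then Err.Raise 383 ' object is not in edit mode
If Len(Value) > 11 Then _
Err.Raise vbObjectError + 1001, "Person", "SSN too long"
udtPerson.SSN = Value
objValid.RuleBroken "SSN", (Len(Trim$(udtPerson.SSN)) <> 11)
End Property
```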

Essentially, we are making the object somewhat self-aware. The object will know whether
it should allow any client code to change its data. The technique of viewing an object as
a self-aware entity is called anthropomorphization. The term is derived from the Greek
anthropos, for human, and morphe, for form; we are changing our view of the object from a
chunk of code to an intelligent entity. This is one of the core tenets of object-oriented
design.

Saving a Snapshot of the Object's Data

The udtSaved variable deserves a bit of explanation. Here's how we declare it:

Private udtSaved As PersonProps

Within our BeginEdit routine, above, the Person object's data is stored in this
udtSaved variable, which is based on the user-defined type PersonProps:

LSet udtSaved = udtPerson

There are a couple of important reasons for doing this, one of which is to make it easy
to handle canceled edits, since we've stored a version of the Person's object data prior
to the edit. The other reason is to make it easy to save the object to a database - but
again, we'll discuss this later on in the "Making Objects Persistent" section of
this chapter.

The Person object uses the udtPerson variable to store all its data values. The
udtSaved variable is a snapshot of the object's state at the time the BeginEdit method was
called. The copy is fast and simple, because the LSet command moves one user-defined
variable directly into another. Visual Basic does virtually no extra work during this
call: it is, essentially, a memory copy, and so it's incredibly fast.

It is certainly possible to store an object's data in Private variables, then declare a
second set of the variables, and copy them all one by one to create a snapshot. But the
technique we're using with the LSet command is far faster, and it's easier to code.
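To make the contrast concrete, here's a sketch of the two approaches (the strSaved...
variable names are purely illustrative):

' with data in individual Private variables, a snapshot means
' a second set of variables and a copy of each one:
strSavedSSN = strSSN
strSavedName = strName
dtSavedBirthdate = dtBirthdate

' with the data in a user-defined type, one LSet call
' makes the entire snapshot:
LSet udtSaved = udtPerson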

Using the BeginEdit Method

The BeginEdit method starts the editing process for the object:

Public Sub BeginEdit()
LSet udtSaved = udtPerson
flgEditing = True
End Sub

The routine simply copies the object's current data into the udtSaved variable, in case
we need to get back to where we started. It also sets the flgEditing variable to True, to
establish that the object is being edited.

This method needs to be called by the UI before any editing takes place, so we'll now
add a line to call it from the EditPerson form's Load routine:

Private Sub Form_Load()
Set objPerson = New Person
EnableOK objPerson.IsValid
objPerson.BeginEdit
End Sub

Using the ApplyEdit Method

Once the EditPerson form has called the BeginEdit method on the Person object, the
object knows that it's being edited. The user can change data on the form to their heart's
content, but they'll eventually have to either save any changes or try to cancel them.

If the user clicks either OK or Apply, we need to save the changes in the object.
Therefore, we'll also add these lines to the EditPerson form:

Private Sub cmdApply_Click()
' save the object
objPerson.ApplyEdit
objPerson.BeginEdit
End Sub

Private Sub cmdOK_Click()
' save the object
objPerson.ApplyEdit
Unload Me
End Sub

Both routines call the ApplyEdit method, and the Apply button's code also calls the
BeginEdit method to resume the editing process for the object. We need to do this because
the Apply button doesn't make the form unload, so the user must be able to continue
editing the data at this point.

Within the Person object itself, the ApplyEdit routine sets the flgEditing variable
to False, indicating that editing is complete. The ApplyEdit method is also responsible
for saving the object's data to the database: once again, a topic we'll cover in more
detail in the section on "Making Objects Persistent".
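At this stage, a minimal ApplyEdit looks something like this (the actual persistence
code comes later in the chapter):

Public Sub ApplyEdit()
flgEditing = False
' data would be saved here
End Sub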

Using the CancelEdit Method

The user might also click the Cancel button, so we'll add the following line to the
EditPerson form:

Private Sub cmdCancel_Click()
' do not save the object
objPerson.CancelEdit
Unload Me
End Sub

All this routine does is call the CancelEdit method within our Person business object.
Here's that CancelEdit method again:

Public Sub CancelEdit()
LSet udtPerson = udtSaved
flgEditing = False
End Sub

This routine does a couple of things: it sets the flgEditing variable to False,
because the editing process is over, and it restores the Person object's data to the
values stored in udtSaved:

LSet udtPerson = udtSaved

Again, this is essentially a memory copy of the data that was saved in the BeginEdit
routine back into the object's central repository of data, udtPerson.

It's important to remember that for this to work, all the object's data must be in the
user-defined type. If data values were kept in module-level variables, we'd need some
extra code to save and restore those values.
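For instance, if the object kept a hypothetical strNotes value in a module-level
variable rather than in the user-defined type, we'd have to snapshot and restore that
value by hand:

' in BeginEdit, alongside the LSet:
strSavedNotes = strNotes

' in CancelEdit, alongside the LSet:
strNotes = strSavedNotes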

Disabling Edits when flgEditing is False

The final set of changes we need to make to the Person object are to make sure that the
object doesn't allow itself to be edited until the BeginEdit method has been called. We
just need to add a line to each Property Let and Property Set to raise an error unless
we're in the middle of editing. We may also choose to disable certain methods; in
particular, those methods that impact our object's internal variables.

For example, we'd add this line to the Property Let Name routine:

Public Property Let Name(Value As String)
If Not flgEditing Then Err.Raise 383
udtPerson.Name = Value
End Property

Error 383 is 'Set not supported (read-only property)', so if we don't want our
property to be editable we can simply raise that error. Of course, we only need to raise
it when flgEditing is False; once the BeginEdit method has been called, the error won't
be raised.

This line just needs to be added to the top of each Property Let in the Person class
module. And we'll do essentially the same thing with the ApplyEdit and CancelEdit methods
in the Person class module, but we'll use error 445 for these instead. This error is
'Object doesn't support this action' and it's more appropriate for disabling methods:

Public Sub CancelEdit()
If Not flgEditing Then Err.Raise 445
LSet udtPerson = udtSaved
flgEditing = False
End Sub

These changes are easy enough to test by slightly breaking the code in our EditPerson
form. Just comment out the call to BeginEdit that we put in the Form_Load routine and then
run the program.

Now, any attempt to change values in the form's fields will result in error 383 being
raised. Clicking the Cancel button should result in error 445.

When you've finished this test, don't forget to uncomment the call to BeginEdit!

While a well-behaved form will never actually encounter these errors, they are vitally
important during development of the UI. Again, our objects must be written with the
assumption that the UI developer is not going to write the perfect set of code on the
first pass, so what we're doing here is helping the UI developer do their job while
protecting our objects from errant code.

Protecting Properties and Methods

In the previous section, we effectively switched some properties from read-write to
read-only, based on whether the object is flagged as editable. We also disabled some
methods using a similar technique.

In this section, we'll quickly run through the reasons why we might want to disable
properties or methods. We'll also consider exactly how to implement this disabling.

Read-only Properties

There are various reasons why we might need a read-only property. The most common is
where we have a property that's calculated from other data within the object. A good
example of this is the Age property in our Person object, which is read-only and is based
on the Birthdate.

Other properties might switch between read-write and read-only, depending upon business
rules or other criteria. We saw an example of this behavior in the previous section, where
we made properties read-only when the object was not in an editable state.

There are two techniques available to us for creating read-only properties. The
simplest technique is to provide no Property Let or Property Set routines for the
property. Of course, this technique allows no flexibility, in that we can't then make the
property read-write at runtime. The second technique available to us is to still create
the Property Let or Property Set routines, but to raise error 383 'Set not supported
(read-only property)' at the top of the routine when we want to make the property
read-only:

Public Property Let X(Value As Integer)
If condition_met Then Err.Raise 383
' regular code goes here
End Property

Disabling Methods

As we've seen, there are situations where we need to temporarily disable methods, Sub,
or Function code within our object. Depending on the state of the object, or various
business rules, we may need to effectively turn off a method.

Disabling a method is as simple as generating error 445 'Object doesn't support this
action' at the top of a method's code when we want that method disabled:

Public Sub X()
If condition_met Then Err.Raise 445
' regular code goes here
End Sub

We did this earlier, in the ApplyEdit and CancelEdit methods of our Person class. They
weren't appropriate unless the object was currently being edited and the flgEditing flag
was set to True.

Write-once Properties

There are also cases where we may wish to create a property that can be written only
once. This is an excellent technique to use for unique key values that identify an
object, and for those situations where business rules dictate that data cannot be changed
once entered. Real-world examples include an electronic signature, or an identity stamp
where an object is stamped with a security code that must never change.

Write-once properties are implemented using the same error raising technique as a
read-only property, but with a bit more logic to support the concept. As an example, let's
enhance our Person object's SSN property to be write-once. After all, once a person gets
assigned a social security number, they've got it for life.

Indicating when the Object is 'New'

We can't consider the SSN to be entered until the object is first saved by the user
when they click the OK or Apply buttons. At that point, we need to lock down the Property
Let SSN and any other write-once properties.

To lock down the value, we'll add a module-level variable to our Person class module to
indicate whether the Person object is 'new'. This same variable will come in useful,
later, when we talk about saving and restoring objects from a database, since a restored
object also needs its write-once properties locked out. Add this variable declaration to
the General Declarations section of our Person class module:

Private flgNew As Boolean

Since the UI might also care to know if an object is new or not (so it can alter its
appearance appropriately), we'll also create a read-only property for this purpose by
adding the following code to the Person object:

Public Property Get IsNew() As Boolean
IsNew = flgNew
End Property

Now let's implement the flgNew variable's operation. We need to start out assuming the
object is indeed new:

Private Sub Class_Initialize()
flgEditing = False
flgNew = True
Set objValid = New BrokenRules
objValid.RuleBroken "SSN", True
End Sub

Then we just need to flip the switch, when the Person object is saved, via the
ApplyEdit method:

Public Sub ApplyEdit()
If Not flgEditing Then Err.Raise 445
flgEditing = False
flgNew = False
' data would be saved here
End Sub

Disabling the Property

All that remains is to change the Property Let SSN routine to become disabled when the
object is no longer considered new:

Public Property Let SSN(Value As String)
If Not flgEditing Then Err.Raise 383
If Not flgNew Then Err.Raise 383
If Len(Value) > 11 Then _
Err.Raise vbObjectError + 1001, "Person", "SSN too long"
udtPerson.SSN = Value
objValid.RuleBroken "SSN", (Len(Trim$(udtPerson.SSN)) <> 11)
End Property

With these changes, the SSN can be entered and changed by the user up to the point
where they click OK or Apply. At that point, the Person object will be saved to the
database, the flgNew flag will be set to False, and the user will no longer be able to
edit the SSN field.

Testing the Code

We can test this in our program, even though the data isn't actually saved to the
database, since the ApplyEdit method still sets the flgNew flag to False when the Apply
button is clicked.

Run the program, enter 11 characters into the SSN field, and click the Apply button.
Now try to change the SSN field again. As a result of the code we've just entered, you'll
find that the SSN is fixed. Once the data is saved to the database our object won’t
allow it to be edited.

Of course the use of write-once properties isn’t without risk. After all, this
technique means that the user can’t change the SSN value after it’s been saved
to the database – even if it is incorrect.

Write-only Properties

Write-only properties are less common than read-only properties, but they do have their
uses. A good example of a write-only property is a password property on a security object.
There's no reason to read the password back, but we would, sometimes, want to set it.
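For instance, a hypothetical security object might implement its password this way
(the names here are illustrative, not part of our project):

Private strPassword As String

Public Property Let Password(ByVal Value As String)
strPassword = Value
End Property

' no Property Get Password is provided, so client code
' can set the password but never read it back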

As with read-only properties, there are two ways to create write-only properties. If we
don't need to dynamically change the status at run-time, we can implement a write-only
property by simply not writing a Property Get routine for the property.

However, if we do need to dynamically change a property from read-write to write-only
for some reason, we just need to raise error 394 'Get not supported (write-only property)'
at the top of the Property Get when the property needs to be write-only:

Public Property Get X() As Long
If condition_met Then Err.Raise 394
' regular code goes here
End Property

Making Objects Persistent

Throughout our discussion, so far, we've danced around the idea of making an object
persistent. Objects are great, but if we can't save an object into a database and then
retrieve it later, they're somewhat limited in their use.

As we've seen, business objects are intended to represent real-world entities. An
entity in the real world, such as a customer, has no concept of saving, adding, updating,
or deleting itself - those things just don't make sense. However, when we create an object
in software to model a customer or product, we need to compromise the model slightly in
order to handle these activities.

A primary goal in making business objects persistent is to minimize the impact to the
logical business model. We want a customer object to be a model of a customer, not a model
of a database record.

In this section, we'll discuss a couple of techniques that we can use to efficiently
save business objects to a database without compromising the integrity of the CSLA. Then,
we'll talk about what part of our program should actually do the work of saving and
retrieving the data. Finally, we'll look at the details of the persistence service that
we'll be using through the rest of the book.

The first thing we need to look at is exactly how we're going to save an object to
the database. Virtually all the databases in use today are relational databases, although
there are exceptions - such as hierarchical databases and object databases. Because of the
prevalence of relational databases, however, we're going to focus on saving and restoring
objects from that type of database, leaving the others to be covered elsewhere.

It's important to decide where to locate the code that takes care of saving or updating
an object in the database. The specifics of how to save an object's data will depend upon
where we put this code. There are three basic approaches:

  • Saving/restoring is handled by user-interface code
  • Business objects save/restore themselves
  • A specialized object manages the saving/restoring of business objects

The first solution may be valid in certain cases, but we won't cover it in any detail,
because it's directly opposed to an object-oriented solution. The second solution is very
valid, but not terribly scalable. In the end, we'll settle on the last solution: it works
well with our object-oriented design, and yet provides good scalability for our
applications.

But let's take a good look at each of these options, and consider their pros and cons
in more detail.

Saving Objects within a Form or BAS Module

In keeping with a more traditional Visual Basic development approach, it's possible to
put the persistence code in the form itself, or in a BAS module called by the form. Of
course, this means putting quite a bit of logic into the user-interface - something that I
don't recommend, since it makes our code much less general.

With this approach, every form that wants to use a given object will need the code to
save and restore that object from the database. This almost guarantees duplication of
code, and largely defeats the goals that we're trying to achieve by using object-oriented
design in our applications.

We're not going to go into the details of any code to support this solution, since it's
so directly in conflict with the principles we're trying to follow in this book.

I've made it very clear, so far, that the business objects are the application, and the
user-interface is pretty expendable. Thus it seems quite logical to assume that each
object should know how to save itself to the database. By just adding some form of Load
and Save methods to each object, it would appear that we've pretty much solved all our
persistence issues. Let's consider this approach.

Implementing a Load Method

We'll consider, as an example, the Person object that we developed in the previous
section.

Don't actually make any of these changes to the PersonObjects project, however, because
this is not yet the optimal solution to our problem. I'll clearly signal when we have
found the best solution.

Were we to implement a solution where objects saved themselves, we could add a Load
method like this (assuming we have a JET database with a Person table for our data):

Public Sub Load(SSN As String)
Dim recPerson As Recordset
Dim strConnect As String
Dim strSQL As String
strConnect = "Provider=Microsoft.Jet.OLEDB.3.51;" & _
"Persist Security Info=False;" & _
"Data Source=C:\person.mdb"
strSQL = "SELECT * FROM Person WHERE SSN='" & SSN & "'"
Set recPerson = New Recordset
recPerson.Open strSQL, strConnect
With recPerson
If Not .EOF And Not .BOF Then
udtPerson.SSN = .Fields("SSN")
udtPerson.Name = .Fields("Name")
udtPerson.Birthdate = .Fields("Birthdate")
Else
recPerson.Close
Err.Raise vbObjectError + 1002, "Person", "SSN not on file"
End If
End With
recPerson.Close
Set recPerson = Nothing
End Sub

This code would simply open the database, perform a lookup of the person (based on the
supplied social security number), and put the data from the recordset into the object's
variables. Also, note that the path we've used when opening our database is an absolute
one; you'll need to change it to point to your own database.

Since this code makes reference to a Recordset object, you would need to add a
reference in your project to ADO. Using the Project-References menu option you would
select the most up-to-date ADO reference, such as Microsoft ActiveX Data Objects 2.0
Library.

Using the Load Method from the UI

From the UI developer's perspective, this would be pretty nice, since all they'd need
to do would be to get the user to enter the social security number. Let's assume the UI
programmer stored this SSN in a variable called strSSN. Some simple code to achieve this
(which ignores any input validation concerns for now), placed in a button event or the
Form_Load routine, might run as follows:

Dim strSSN As String
strSSN = InputBox$("Enter the SSN")

Remember that we're not actually
making these changes to our Person project and PersonDemo UI, because this is not the
optimal solution to our problem.

The working UI code would then follow:

With objPerson
.Load strSSN
txtSSN = .SSN
txtName = .Name
txtBirthdate = .Birthdate
lblAge = .Age
End With

Of course, this code would not only put the values from the object into each field on
the form, but it would also trigger each field's Change event. These events are set up to
put the values right back into the object, which is a rather poor solution. We could,
perhaps, overcome this with a typical UI trick: a module-level flag to indicate that we
were loading the data:

Private flgLoading As Boolean

Then, at the top of each Change event, we'd just add the following line of code:

If flgLoading Then Exit Sub

And finally, we'd slightly alter the code that copied the values to the form's fields:

flgLoading = True
With objPerson
.Load strSSN
txtSSN = .SSN
txtName = .Name
txtBirthdate = .Birthdate
lblAge = .Age
End With
flgLoading = False

Saving the Object's Data through the ApplyEdit Method

Back in the Person object, we already have a method, ApplyEdit, to handle updating the
object; so we would just add some code to that routine:

Public Sub ApplyEdit()
Dim recPerson As Recordset
Dim strConnect As String
Dim strSQL As String
If Not flgEditing Then Err.Raise 445
flgEditing = False
flgNew = False
strConnect = "Provider=Microsoft.Jet.OLEDB.3.51;" & _
"Persist Security Info=False;" & _
"Data Source=C:\person.mdb"
strSQL = "SELECT * FROM Person WHERE SSN='" & SSN & "'"
Set recPerson = New Recordset
recPerson.Open strSQL, strConnect, adOpenKeyset, adLockOptimistic
With recPerson
' ADO has no Edit method; for an existing record we simply
' assign the new field values, adding a record if none was found
If .EOF And .BOF Then .AddNew
.Fields("SSN") = udtPerson.SSN
.Fields("Name") = udtPerson.Name
.Fields("Birthdate") = udtPerson.Birthdate
.Update
End With
recPerson.Close
End Sub

Since this would just update an already existing routine, we wouldn't need to make any
changes to our form's code to support the new functionality.

This solution is pretty nice. Probably the biggest benefit of objects that save
themselves is that they are very easy for the UI developer to understand. They simply need
to have some code to support the Load method, and they're all set.

It's worth noting that the way we would have coded the Load and ApplyEdit methods
essentially employed optimistic locking. This means that no lock is held on the data in
the database, so many users may bring up the same row of data at the same time.

Since no database connection was maintained once the data was loaded or saved, there
was no lock on the data in the database while the object was in memory.

We could very easily modify the code to maintain an open recordset for as long as the
object existed, thus converting this to use pessimistic locking. This means that a lock
would be held on any data that is brought into our objects, preventing more than one user
from altering the same set of data at the same time.

This could become very resource intensive if there were a lot of objects, however,
since each object would have to maintain its own reference to a recordset.
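A sketch of that variation (illustrative only - we're not adding this to the project):
recPerson would become a module-level variable, opened with a pessimistic lock and held
open until the object is destroyed:

Private recPerson As Recordset

Public Sub Load(SSN As String)
' ... build strConnect and strSQL as before ...
Set recPerson = New Recordset
recPerson.Open strSQL, strConnect, adOpenKeyset, adLockPessimistic
' recPerson stays open for the life of the object,
' holding its lock on the underlying data
End Sub

Private Sub Class_Terminate()
If Not recPerson Is Nothing Then recPerson.Close
End Sub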

There are a couple of drawbacks to this approach of objects saving themselves. One of
our primary goals in creating objects is to make them accurate models of the business
entities they represent. By putting all the data-handling code into the objects
themselves, we've effectively diluted the business logic with the mechanics of data
access. While we can't make our objects pure models of the business entities, it's
important to keep as close to that ideal as possible.

Another drawback is that we've tied the data handling and the business handling
together in a single object. If our intent is to distribute processing following the CSLA
then this approach doesn't help us meet that goal. What we really want is to separate the
data access code from the code that supports the business rules - so they can be
distributed across multiple machines if need be.

On the upside, this technique of objects saving themselves is very good for
applications where we know that the solution doesn't need to scale beyond a physical
2-tier setting. If the application will always be communicating with an ISAM database,
such as Microsoft Access, or directly with a SQL database, such as Oracle, then this may
be an attractive approach. Be warned, however: applications rarely stay as small as they
were originally designed, and it may be wise to consider a more scalable solution.

Objects That Save Other Objects

Now that we've discussed how the UI could directly save an object, and how an object
might directly save itself, we'll look at how we can design our business objects to rely
on another object to manage their data access. This approach has a couple of very
important benefits and is the one we'll use to write our video rental store application
through the book.

An important benefit of having an object save itself, as in the previous section, is
that the UI developer has a very simple and clear interface for loading and saving an
object. At the same time, we really don't want the data access code to be located in the
object itself, since that doesn't provide a very scalable solution.

One way to design an application's persistence is to create an object that the UI can
call when it needs to save a business object to the database. This new object is designed
to manage the persistence of the business object, and so it's called an object manager.
The object manager contains all the code to retrieve, store, and delete a business object
from the database on behalf of the UI.

Another alternative utilizes data-centric business objects. This is basically the
concept that we can take objects as we saw in the previous section, "Objects that
Save Themselves", and pull only the data-centric processing out into a new object.
Then the UI-centric business object can make use of the data-centric business object as
needed to retrieve, store, and delete its data from the database.

The object manager solution is a valid approach to designing an application with
distributed objects, and so we'll walk through how it could be implemented. Once we've
coded this solution, we'll see how easy it is to move from an object manager solution to
one using data-centric business objects.

One approach to persisting business objects is to create an object manager object for
use by the UI developer. The UI code would simply ask the object manager to load and save
objects on its behalf, typically passing the business objects themselves as parameters to
the Load and Save methods of the object manager.
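Under that design, hypothetical UI code might look something like this (the method
signatures shown are assumptions at this point - we'll define the real interface
shortly):

Dim objPerson As Person
Dim objManager As PersonManager

Set objPerson = New Person
Set objManager = New PersonManager

' ask the manager to fill the object from the database
objManager.Load objPerson, strSSN

' ... the user edits the object through the form ...

' ask the manager to store the object's data
objManager.Save objPerson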

This design is illustrated in the following figure:

As shown, the presentation tier, or UI,
will interact with both our UI-centric business objects, such as our Person object, and
the object manager that knows how to retrieve, save, and delete our business objects.

Sticking with the Person object we've used so far, let's take a look at a PersonManager
class. The PersonManager will handle all the details of retrieving, saving, and deleting
Person objects from the database. The UI code will ask the PersonManager to retrieve a
Person object when it wants one loaded from the database. The UI will also use it to save
a Person object back to the database or to delete a Person object.

An Object Manager as an Out-of-Process Server

One of our goals is to be able to put the PersonManager object on a separate machine
from the client workstation. This will allow us more flexibility in how we deploy our
application, as we can put this object manager on an application server machine and
increase the scalability of our application:

This figure shows where an object
manager, such as PersonManager, would fit into the CSLA. It also shows how the
PersonManager can be run on an application server machine separate from the client
workstation.

We don’t have a ‘data-centric’ object in this case, as the Object
Manager fills that role under this scenario. In many ways the Object Manager is analogous
to a data-centric business object, but we use the Object Manager in a different manner
than we would a data-centric business object.

For us to be able to run the PersonManager object on a separate machine, we'll need to
implement this class in a separate Visual Basic project from the Person class.

As we discussed earlier in this chapter, there are serious performance concerns when
communicating between processes or across a network. Since we're designing the
PersonManager to be at least out-of-process, and very likely on another machine across the
network, we need to take steps to make sure that our communication is very efficient.

Adding GetState and SetState methods to Person

To this end, we'll use the user-defined type and LSet technique discussed earlier in
this chapter. In order to implement this in the Person object itself, we need to add a
couple of new methods: GetState and SetState.

To use the LSet technique, we need to make sure our object's data is stored using a
user-defined type. Then we need to create a second user-defined type of the same length to
act as a buffer for all the detailed state data from our object.

We'll use the word state to describe the core data that makes up an object. An object
can be viewed as an interface, an implementation of that interface (code), and state
(data). An object could have a lot more data than is really required to define its state.
The state data includes only that which must be saved and restored to return an object to
its original state.

Our Person object already stores its state data in a user-defined type, PersonProps.
However, this type will be needed by both the Person object and the PersonManager object,
so we'll want to make it available to both. The easiest way to do this is to add a new
code module to the PersonObjects project and move the PersonProps type from the Person
class module into this code module. We'll call this new code module PersonUDTs:

Public Type PersonProps
SSN As String * 11
Name As String * 50
Birthdate As Date
Age As Integer
End Type

Notice that we've also changed the scope of the user-defined type from Private to
Public so that it will be available outside this code module.

Now we can add the following user-defined type to the PersonUDTs code module:

Public Type PersonData
Buffer As String * 67
End Type

This type will act as a buffer for the PersonProps data, allowing us to use the LSet
command to easily copy the detailed information from PersonProps into the simple string
buffer declared here.

With the user-defined types set up, we're all ready to add the GetState and SetState
methods to our Person class module:

Public Function GetState() As String
Dim udtBuffer As PersonData
LSet udtBuffer = udtPerson
GetState = udtBuffer.Buffer
End Function

To get the object's state, we just copy the detailed udtPerson variable into udtBuffer.
The udtBuffer variable's Buffer element is just a String, so we can return it as the
result of the function:

Public Sub SetState(ByVal Buffer As String)
Dim udtBuffer As PersonData
udtBuffer.Buffer = Buffer
LSet udtPerson = udtBuffer
CalculateAge
End Sub

To set the object's state, we simply reverse the process: accepting a string buffer and
copying it into udtBuffer. Then we just LSet udtBuffer's data into udtPerson, and the
object is restored.

We also make a call to the CalculateAge method to ensure that the read-only Age
property will return the correct value. Since the data passed into our object via the
SetState method bypasses all our Property Let routines, we need to make sure that any
required processing (such as calculating the age) is performed as part of the SetState
method.

Given these methods, our PersonManager object needs to make only one call to retrieve
all of the data from our object, and make only one other call to send all the object's
data back. As we discussed earlier, this is very fast and efficient, even over a network
connection.

Cloning Business Objects with GetState and SetState

Since GetState and SetState simply copy and restore the object's state data, we can use
them for purposes other than persistence. They provide a built-in cloning capability for
each object, since we can write code like this:

Dim objPerson1 As Person
Dim objPerson2 As Person
Set objPerson1 = New Person
Set objPerson2 = New Person
objPerson1.SetState objPerson2.GetState

The GetState method of objPerson2 simply converts that object's data into a string
buffer. That buffer is then passed to the SetState method of objPerson1, which converts it
back into detailed state data. We've moved all the detailed state data from objPerson2 to
objPerson1 in one line of code.

The Person Object's ApplyEdit Method

In our original Person object's ApplyEdit method, we inserted a comment to indicate
that this routine would be responsible for saving the object's data:

Public Sub ApplyEdit()
If Not flgEditing Then Err.Raise 445
flgEditing = False
flgNew = False
' data would be saved here
End Sub

This isn't actually true if we intend to use an object manager like PersonManager.
Instead, the PersonManager object itself will be responsible for saving the Person
object's data. We don't need to make any changes to the ApplyEdit method, but it's
important to recognize that the work involved in saving the Person object won't be done
here and that it will be handled by the PersonManager.

It could be argued that it's possible to merge the code from ApplyEdit into the
GetState method, since the GetState method will be called by PersonManager when it's
saving the object - so the edit process must be complete. Unfortunately, this would
introduce a side-effect into the GetState method that isn't intuitive. From the outside,
just looking at the name GetState, you'd never guess that it also ends the editing
process. To avoid confusion, methods should be named as descriptively as possible and be
free of unexpected side-effects, and merging these two routines could easily cause such
confusion.

Creating the PersonManager Object

Now that we've got the Person object ready to go, let's build the PersonManager object.
To start off, make sure you've saved the PersonObjects project and, with the File-New
Project menu option, start a new ActiveX EXE project. Set the project's Project Name to
PersonServer under the Project-Properties menu option.

Change the name of Class1 to PersonManager and make sure its Instancing property is set
to 5-Multiuse.

Since we'll be sending data back and forth between our Person object and the
PersonManager object through the use of the LSet technique, it's important that both
objects have access to the PersonProps and PersonData user-defined types. Fortunately,
we've put those types into the code module named PersonUDTs, so we can choose Project-Add
File and add that code module to our new PersonServer project. Now both the PersonObjects
and PersonServer projects have access to the exact same code module containing our
user-defined types.
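As a reminder of what that shared module contains, a minimal sketch might look like the
following. The exact field lengths here are illustrative - what matters is that the
Buffer string is sized to cover the total byte size of PersonProps, so that LSet can copy
between the two types:

' PersonUDTs.bas - shared by the PersonObjects and PersonServer
' projects so both sides agree on the buffer layout

' the detailed state data
Public Type PersonProps
  SSN As String * 11
  Name As String * 50
  Birthdate As Date
End Type

' a single string buffer overlaying the same bytes; 11 + 50
' characters, plus 4 characters' worth of bytes for the
' 8-byte Date, gives 65
Public Type PersonData
  Buffer As String * 65
End Type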

Now we're ready to add some code to the PersonManager class.

Adding a Load Method to PersonManager

The UI code will use the PersonManager object to load object data from the database
into a new Person object. To do this, we'll implement a Load method on the PersonManager
object for use by the UI developer. This is where things get interesting.

At the very least, we need to pass the Load method an identifier so that it can
retrieve the right person. In this example, we'll pass the social security number, with
the assumption that it provides a unique identifier for an individual.

Now we need to figure out how to get the data back to the client and into an object.
Ideally, we'd like to make the Load method a function that returns a fully loaded Person
object. This would mean our UI client code could look something like this:

Dim objPersonManager As PersonManager
Dim objPerson As Person
Set objPersonManager = New PersonManager
Set objPerson = objPersonManager.Load(strSSN)

Unfortunately, this is difficult at best. In order to return an object reference, the
Load method needs to create an object. When an object is created, using either New or
CreateObject, it's instantiated in the same process, and on the same machine, as the code
that creates it. This means that a Person object created by the PersonManager's Load
method would be created, in the PersonManager object, on whatever machine that code is
running.

We need the Person object to be created on the client machine, inside our client
process. This means that the code to instantiate the object needs to be in that process as
well. As a compromise, let's make our client code look something like this:

Dim objPersonManager As PersonManager
Dim objPerson As Person
Set objPersonManager = New PersonManager
Set objPerson = New Person
objPersonManager.Load strSSN, objPerson

This way, the object is created in the client, but we'll pass it as a reference to the
PersonManager object to be loaded with data.

Given this approach, let's enter the following code, for the Load method itself, into
the PersonManager class module:

Public Sub Load(ByVal SSN As String, Person As Object)
  Dim recPerson As Recordset
  Dim strConnect As String
  Dim strSQL As String
  Dim udtPerson As PersonProps
  Dim udtBuffer As PersonData

  strConnect = "Provider=Microsoft.Jet.OLEDB.3.51;" & _
    "Persist Security Info=False;" & _
    "Data Source=C:\person.mdb"
  strSQL = "SELECT * FROM Person WHERE SSN='" & SSN & "'"
  Set recPerson = New Recordset
  recPerson.Open strSQL, strConnect
  With recPerson
    If Not .EOF And Not .BOF Then
      udtPerson.SSN = .Fields("SSN")
      udtPerson.Name = .Fields("Name")
      udtPerson.Birthdate = .Fields("Birthdate")
      LSet udtBuffer = udtPerson
      Person.SetState udtBuffer.Buffer
    Else
      recPerson.Close
      Err.Raise vbObjectError + 1002, "Person", "SSN not on file"
    End If
  End With
  recPerson.Close
End Sub

Once again, this code makes reference to a Recordset object, so you may need to add a
reference to ADO in your project. Use the Project-References menu option and select
the most up-to-date ADO reference, such as the Microsoft ActiveX Data Objects 2.0 Library.

For the most part, this is pretty straightforward database programming, but let's walk
through the routine to make sure everything is clear.

The code opens the database and builds a recordset based on a SQL statement using the
social security number:

strConnect = "Provider=Microsoft.Jet.OLEDB.3.51;" & _
  "Persist Security Info=False;" & _
  "Data Source=C:\person.mdb"
strSQL = "SELECT * FROM Person WHERE SSN='" & SSN & "'"
Set recPerson = New Recordset
recPerson.Open strSQL, strConnect

If we successfully retrieve the data, we just load that data into our user-defined type
and use LSet to copy the detailed data into a user-defined type that represents a single
string buffer:

udtPerson.SSN = .Fields("SSN")
udtPerson.Name = .Fields("Name")
udtPerson.Birthdate = .Fields("Birthdate")
LSet udtBuffer = udtPerson

Now that we've got all the data in a single string, we can just make a single call to
the Person object's SetState method, as discussed above:

Person.SetState udtBuffer.Buffer

This technique does require that both the detail and buffer user-defined types be
available to both the business object project and the PersonManager object. The best way
to handle this is to put the UDT definitions in a BAS module and include that module in
both projects. Better yet, if you're using source code control such as Visual SourceSafe
then you can link the file across both projects and allow the source control software to
keep them in sync.

Once we've called the Person object's SetState method to pass it the data from the
database, the UI will have a reference to a fully loaded Person object. Then the UI code
can use that Person object through its properties and methods.

Adding a Save Method to PersonManager

At some point, the UI will need to save a Person object's data to the database. To do
this, it will use the PersonManager, so we'll add a Save method to the PersonManager
object to handle the add and update functions.

The Save method can have a fairly simple interface, since all we really need to do is
send down a reference to the object itself. The code can then directly call the GetState
method of the Person object to retrieve its data. Here is the code:

Public Sub Save(Person As Object)
  Dim rsPerson As Recordset
  Dim strConnect As String
  Dim strSQL As String
  Dim udtPerson As PersonProps
  Dim udtBuffer As PersonData

  udtBuffer.Buffer = Person.GetState
  LSet udtPerson = udtBuffer
  strConnect = "Provider=Microsoft.Jet.OLEDB.3.51;" & _
    "Persist Security Info=False;" & _
    "Data Source=C:\person.mdb"
  strSQL = "SELECT * FROM Person WHERE SSN='" & udtPerson.SSN & "'"
  Set rsPerson = New Recordset
  rsPerson.Open strSQL, strConnect, adOpenKeyset, adLockOptimistic
  With rsPerson
    If Person.IsNew Then .AddNew
    .Fields("SSN") = udtPerson.SSN
    .Fields("Name") = udtPerson.Name
    .Fields("Birthdate") = udtPerson.Birthdate
    .Update
  End With
  rsPerson.Close
End Sub

A good question, at this point, might be: why pass the object reference when we could
just pass the state string returned by GetState? In this case, it would accomplish the
same thing, but with one less out-of-process or network call.

Suppose, however, that the Person object also included a comment field, a dynamic
string in the object, and a memo or long text field in the database. Since this variable
would be dynamic in length, we couldn't put it into a user-defined type, and so we
couldn't easily pass it within our state string.

In a case like this, the Save method may not only need to use the GetState method, but
it may also have to use a GetComment method - which we'd implement in the Person object to
return the comment string.

Basically, by passing the object reference, rather than just the state string, we've
provided ourselves with virtually unlimited flexibility in terms of communication between
the PersonManager and Person objects.
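As an illustration of that flexibility, a hypothetical GetComment method might look like
the following. The strComment variable and the Comment database field are assumptions for
the sake of the example - they aren't part of the Person object we've built so far:

' in the Person class: expose the dynamic-length comment value,
' which can't travel inside the fixed-size UDT buffer
Public Function GetComment() As String
  GetComment = strComment
End Function

The Save method could then call Person.GetComment alongside Person.GetState and write the
result into the Comment field, since it holds a reference to the whole object rather than
just a state string.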

To save a Person object, the code in our form's cmdOK_Click and cmdApply_Click event
routines will need to be updated. Open our PersonDemo project and bring up the form’s
code window. Change these two routines as shown:

Private Sub cmdApply_Click()
  Dim objPersonManager As PersonManager

  ' save the object
  Set objPersonManager = New PersonManager
  objPersonManager.Save objPerson
  objPerson.ApplyEdit
  objPerson.BeginEdit
End Sub

Private Sub cmdOK_Click()
  Dim objPersonManager As PersonManager

  ' save the object
  Set objPersonManager = New PersonManager
  objPersonManager.Save objPerson
  objPerson.ApplyEdit
  Unload Me
End Sub

Since we're passing the Save method a reference to the objPerson object, it can
retrieve the data from the Person object and write the data out to the database.

Adding a Delete Method to PersonManager

At this point, our UI code can use the PersonManager object's Load method to retrieve a
Person object and the Save method to add or update a Person object into the database. The
only remaining operation we need to support is removal of a Person object from the
database.

To provide this support, we'll add a Delete method to the PersonManager object. We can
use the same identity value for the Delete that we used for the Load; in this case, the
social security number. And since we don't need the object's data, we don't need to worry
about passing the object reference at all:

Public Sub Delete(SSN As String)
  Dim cnPerson As Connection
  Dim strConnect As String
  Dim strSQL As String

  strConnect = "Provider=Microsoft.Jet.OLEDB.3.51;" & _
    "Persist Security Info=False;" & _
    "Data Source=C:\person.mdb"
  strSQL = "DELETE * FROM Person WHERE SSN='" & SSN & "'"
  Set cnPerson = New Connection
  cnPerson.Open strConnect
  cnPerson.Execute strSQL
  cnPerson.Close
  Set cnPerson = Nothing
End Sub

To delete a Person object, the UI code would look like this:

Dim objPersonManager As PersonManager
Set objPersonManager = New PersonManager
objPersonManager.Delete objPerson.SSN

This might be implemented behind a Delete button or a menu option - whatever is
appropriate for the specific user-interface.

Testing the Save Method

We should be able to immediately try out the PersonManager object's Save method. To do
this, we'll need to compile our PersonServer project into an EXE. This is done using the
File-Make PersonServer.exe menu option from within Visual Basic.

Once the PersonServer project has been compiled, we're almost ready to run our
PersonDemo program. Load up the PersonDemo project again, and just add a reference to the
PersonServer using the Project-References menu option.

Now run the PersonDemo program. The form will come up as always, allowing us to enter
information into our Person object. However, with the changes we just made to the code
behind the OK and Apply buttons, clicking either one should cause our Person object's data
to be saved to the database by our PersonManager object.

Testing the Load Method

The Load method is a bit trickier, since we need to come up with some way to get the
SSN value from the user before we can call the method. The UI code we looked at for
calling the Load method assumed we already had the SSN value.

Enter the following lines into our EditPerson form's Form_Load routine; this way, we
can load a Person object as the form loads:

Private Sub Form_Load()
  Dim objPersonManager As PersonManager

  Set objPerson = New Person
  Set objPersonManager = New PersonManager
  objPersonManager.Load strSSN, objPerson
  EnableOK objPerson.IsValid
  objPerson.BeginEdit
End Sub

We can easily enhance this by using the InputBox$ function to ask the user for the SSN.
In the PersonDemo project, change the Form_Load method as follows:

Private Sub Form_Load()
  Dim strSSN As String
  Dim objPersonManager As PersonManager

  Set objPerson = New Person
  Set objPersonManager = New PersonManager
  strSSN = InputBox$("Enter the SSN")
  objPersonManager.Load strSSN, objPerson
  EnableOK objPerson.IsValid
  objPerson.BeginEdit
End Sub

This gets us almost there. If the user supplies a valid SSN value then our Person
object will be loaded with the data from the database. All that remains is to update the
display on the form, so let's add these lines to the Form_Load routine:

Private Sub Form_Load()
  Dim strSSN As String
  Dim objPersonManager As PersonManager

  Set objPerson = New Person
  Set objPersonManager = New PersonManager
  strSSN = InputBox$("Enter the SSN")
  objPersonManager.Load strSSN, objPerson
  flgLoading = True
  With objPerson
    txtSSN = .SSN
    txtName = .Name
    txtBirthDate = .BirthDate
    lblAge = .Age
  End With
  flgLoading = False
  EnableOK objPerson.IsValid
  objPerson.BeginEdit
End Sub

Notice, here, that we're using the module-level variable trick we saw in an earlier
discussion where we looked at objects that save themselves: we set flgLoading to True
while we're loading information into our form, so that we can switch off the Change
events of the form's text fields.

Therefore, we also need to declare this module-level variable in the General
Declarations area of our EditPerson form:

Private flgLoading As Boolean

and we need to add this line to all the Change events in the EditPerson form:

Private Sub txtName_Change()
  If flgLoading Then Exit Sub
  objPerson.Name = txtName
End Sub

If we run our PersonDemo program now, we'll be prompted for an SSN value. Entering a
valid SSN should cause that Person object to be displayed. Of course, we don't have any
error trapping for invalid SSN entries, but this demonstrates the basic concepts of saving
and restoring an object from the database. We'll build a more robust application based on
these general techniques starting in Chapter 5.
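If we did want to guard against an invalid SSN, one simple approach - a sketch, not part
of the project as it stands - is a standard error handler in Form_Load that traps the
error raised by the Load method:

Private Sub Form_Load()
  Dim strSSN As String
  Dim objPersonManager As PersonManager

  On Error GoTo LoadFailed
  Set objPerson = New Person
  Set objPersonManager = New PersonManager
  strSSN = InputBox$("Enter the SSN")
  objPersonManager.Load strSSN, objPerson
  ' ... display code as before ...
  Exit Sub

LoadFailed:
  ' the Load method raises vbObjectError + 1002 for "SSN not on file"
  MsgBox "Unable to load this person: " & Err.Description
  Unload Me
End Sub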

The use of an object manager makes the process of retrieving, saving, and deleting an
object pretty straightforward. However, compared to having an object save itself, this is
a bit more complex from the UI developer's viewpoint. The UI developer not only needs to
understand how to create and use the business objects themselves, but they need to
understand how to create and use the objects that manage the persistence. This seems like
extra work for little gain.

On the other hand, it's a small step from having the UI developer call the persistence
manager object, as we've just seen, to having the business object itself call the
persistence manager object:

This figure indicates that the presentation tier, or UI, only needs to interact with
our UI-centric business objects. The UI developer just uses simple Load and ApplyEdit
methods on the UI-centric business object, and lets the UI-centric business object take
care of asking the data-centric object to actually retrieve, save, or delete the data. The
UI developer doesn't have to worry about any of the code to persist the object. At the
same time, we get the benefit of having the persistence code in a separate object that can
be distributed across the network.

Let's look at how we can modify the previous example to use this new and improved
technique. Fortunately, the changes aren't too difficult, so this will go quickly.

Simplifying the Code in the Form

First, let's look at the UI code. The whole idea is to simplify it, and we can easily
do that. In fact, we can return it to the form it was in earlier in the chapter, before we
implemented the PersonManager. The Apply and OK button code needs to be changed to appear
as follows:

Private Sub cmdApply_Click()
  objPerson.ApplyEdit
  objPerson.BeginEdit
End Sub

Private Sub cmdOK_Click()
  objPerson.ApplyEdit
  Unload Me
End Sub

We also have code in the Form_Load routine to get an SSN value from the user and to
load our Person object using the PersonManager object. We'll need to simplify that code as
well:

Private Sub Form_Load()
  Dim strSSN As String

  Set objPerson = New Person
  strSSN = InputBox$("Enter the SSN")
  objPerson.Load strSSN
  flgLoading = True
In all three routines, the big difference is that our form's code doesn't need to
create or deal with a PersonManager object. All the UI developer needs to be concerned
with are the basic methods provided by the UI-centric business object, which is our Person
object.

Since we no longer need to use the PersonServer project from the PersonDemo UI project,
we need to use the Project-References menu option and remove the reference to the
PersonServer project.

Adding a Load Method to Person

The changes to the Person object itself are a bit more extensive. Earlier in the
chapter, we discussed having business objects save themselves to the database. To do this,
we created a Load method for the object and enhanced the ApplyEdit method. Basically, we
want to do the same thing here, except that the actual database code will be in a separate
object, PersonManager.

First, let's add a Load method to the Person object. Enter the following code into the
Person class module in the PersonObjects project:

Public Sub Load(SSN As String)
  Dim objManager As PersonManager

  Set objManager = New PersonManager
  objManager.Load SSN, Me
  Set objManager = Nothing
End Sub

This should look familiar, as it's pretty much the same code we just dealt with when
having the client talk to the PersonManager. The code simply creates a PersonManager
object, and then asks it to retrieve the object's data by passing the social security
number and Me, a reference to the current object.

You'll now need to add a reference to the PersonServer project from within your
PersonObjects project. Use the Project-References menu option, and select PersonServer.

It's worth noting, at this point, that the PersonManager object itself is entirely
unchanged from our previous example. By simply adding a few extra lines of code into our
business objects, we've dramatically simplified the UI developer's job, and we don't even
have to change the objects that contain the data access code.

Updating the Person Object's ApplyEdit Method

Back to the Person object, we also need to change the ApplyEdit method to talk to the
PersonManager:

Public Sub ApplyEdit()
  Dim objManager As PersonManager

  If Not flgEditing Then Err.Raise 445
  Set objManager = New PersonManager
  objManager.Save Me
  Set objManager = Nothing
  flgEditing = False
  flgNew = False
End Sub

Again, all we've done is create a PersonManager object, just like we did in the UI of
the previous example. Then we just call its Save method, passing Me, a reference to the
current object, as a parameter. Of course, the UI developer simply calls ApplyEdit to save
the object to the database; they don't need to worry about any of these details.

Adding a Delete Method to Person

Finally, let's add a Delete method to the Person business object. After all, we've
already got that capability built into PersonManager; we just need to make it available to
the UI developer via the business object:

Public Sub Delete()
  Dim objManager As PersonManager

  Set objManager = New PersonManager
  objManager.Delete udtPerson.SSN
  Set objManager = Nothing
End Sub

The only real drawback to this implementation of a Delete method is that the UI code
may cheat and continue to retain a reference to the Person object after the Delete method
has been called. This can make it much harder to debug the UI, since the developer may not
immediately spot the fact that they are still using an object that has theoretically been
deleted. One solution to this problem is to maintain a Boolean variable inside the object
to indicate that the object has been deleted. Using this variable, we can add a line at
the top of every property and method to raise the appropriate error to disable them, as
discussed earlier in the chapter.
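A minimal sketch of that technique might look like the following - flgDeleted is a name
we're inventing here, and the guard line would need to be repeated at the top of every
property and method:

' module-level flag in the Person class
Private flgDeleted As Boolean

Public Sub Delete()
  Dim objManager As PersonManager

  Set objManager = New PersonManager
  objManager.Delete udtPerson.SSN
  Set objManager = Nothing
  flgDeleted = True
End Sub

' every property and method starts with a guard like this
Public Property Let Name(Value As String)
  If flgDeleted Then Err.Raise 445   ' object no longer usable
  ' ... existing code ...
End Property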

This final approach to object persistence is a combination of the other techniques. It
takes the best from each and puts them together to create an object to manage our data
access that is transparent to the UI developer, easy to use for the business object
developer, and that can be distributed to a central application server machine.

Testing the Code

The project should now run just as it did when we ran it under the full auspices of the
PersonManager - we're able to perform deletions and apply edits as normal.

This is as far as we shall pursue our PersonDemo project. It's still in a pretty rough
form, and there are lots of things we could do to improve it, of course; but the main
functionality is in place, and it's served our purposes as we've explored a number of key
concepts in this chapter. We're now ready to move on to bigger and better things - such as
our video rental store project, where we'll develop the techniques we've learnt with our
PersonDemo project to produce a sophisticated set of business objects and UIs.

In this chapter, we returned to the CSLA. We looked at how that logical architecture
can be implemented on a single machine and across multiple machines. As the number of
physical tiers is increased, we can gain better distribution of the processing -
increasing our flexibility and scalability with each layer. Of course, each extra layer of
hardware can add communications overhead to our application.

We explored some of the business object design issues that impact user-interface
developers. One of the primary goals of a business object developer is to make it easy for
UI developers to work with the objects. At the same time, the business objects must
protect themselves. Essentially, the business object developer must assume that the
user-interface code will do something to break the objects - and the developer must take
steps to prevent that from happening.

We wrapped up the chapter by looking at different techniques that we can use to save
and restore object data in a database, or make objects persistent. Making objects
persistent is the key to creating client/server applications using business objects. Most
of an application's performance issues surround the techniques used to persist objects in
a database, so it's important to choose the appropriate technique for each application.

We'll continue to explore the concepts from this chapter throughout the remainder of
the book. Using the video store example from Chapter 3, we'll walk through the development
of a series of applications by applying the CSLA to each typical physical model. Here's
what we'll be looking at through the next few chapters:

  • Chapter 5 Build the simpler objects for a Video store
  • Chapter 6 Build more complex parent-child objects for a Video store
  • Chapter 7 Create a UI using Visual Basic forms
  • Chapter 8 Add code to our objects to save themselves to the database
  • Chapter 9 Create a UI using Microsoft Excel
  • Chapter 10 Using data-centric business objects over Distributed COM
  • Chapter 11 Distributing objects over DCOM
  • Chapter 12 Using data-centric business objects with Microsoft Transaction Server
  • Chapter 13 Active Server Pages and HTML as a front end
  • Chapter 14 Using an IIS Application as a front end
  • Chapter 15 Create a UI using a DHTML Application
