Introduction to LDAP (Lightweight Directory Access Protocol)
However, not every connected data store is a candidate for consolidation. Take, for example, a human resources application that relies on a set of database tables to store information. It may not make sense from an application functionality perspective for that particular application's data store to be consolidated into an enterprise directory. Some of the information may fit better in relational databases for the reasons we stated in section 1.2.1, whereas other information may not be a good candidate for synchronization because of privacy concerns. So, instead of attempting to directly replicate everything from human resources into the directory, you need a form of intelligent synchronization.
In the area of identity management, directory integration almost always seems like a great idea in theory. For example, the management of users' computer accounts in a particular organization from hire to fire demonstrates the value of synchronization and other advanced integration technology.
Today, it is often necessary to touch multiple data repositories to commit a single change uniformly to all the places that store information about a person. These changes are usually performed by different application and system administrators. In more mature environments, changes may be synchronized with scripts to facilitate this process. When administrators do not coordinate their changes, or if an automated synchronization script fails, the data repositories are no longer synchronized, and at least one of the repositories will contain stale data.
If this stale data is simply a telephone number, the impact is probably minimal. However, if an account must be deleted or suspended due to an employee's termination, the data repository with stale data is at risk from the terminated employee. If the stale data resides in an enterprise directory that is used for authenticating and authorizing users to all non-legacy systems and applications, this one failed change can potentially put the organization's entire intranet at risk. Proper directory integration is key to reducing these types of risks. For this reason, it is important to spend an adequate amount of time planning for integration.
A general integration planning process entails identifying which data elements exist in each existing data source, selecting those that should be shared, and mapping between the source and destination schema (see figure 1.23).
Multiple data repositories typically store information about a person. Deciding which attributes come from where and mapping them to a normalized schema is an important part of any directory integration process. Note that the word normalized here should not be confused with database normalization rules.
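The mapping step described above can be sketched as a simple translation table from source attribute names to the normalized schema. This is an illustrative sketch only; the attribute names (`empno`, `fullname`, `workphone`, and so on) are assumptions, not taken from any particular HR product or directory schema:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of mapping source (HR) attribute names onto a normalized
// directory schema. Attributes that were not selected for sharing
// (e.g., private salary data) have no mapping and are dropped.
public class SchemaMapper {
    // Source-attribute -> normalized-attribute mapping table.
    private static final Map<String, String> MAPPING = new HashMap<>();
    static {
        MAPPING.put("empno", "employeeNumber");
        MAPPING.put("fullname", "cn");
        MAPPING.put("workphone", "telephoneNumber");
    }

    // Translate one source record into the normalized schema.
    public static Map<String, String> normalize(Map<String, String> source) {
        Map<String, String> out = new HashMap<>();
        for (Map.Entry<String, String> e : source.entrySet()) {
            String target = MAPPING.get(e.getKey());
            if (target != null) {
                out.put(target, e.getValue());
            }
        }
        return out;
    }
}
```

Note that unmapped attributes simply never reach the directory, which is one way the privacy concerns mentioned earlier can be enforced at the integration layer.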
This process and ways of implementing it are described in detail in chapter 7.
1.7.1 Integration via metadirectories
We cannot emphasize enough that the consolidation of all data repositories into a single enterprise directory within even the smallest of organizations is not likely to happen in our lifetimes. Even if it were possible to rewrite every legacy application to use a single standard, different directory and database software is better for different tasks. As shown in figure 1.24, this leads to many different environments within an organization that have different variations of the same user.
Different applications have different data repository requirements. It is not likely that a single data store could accommodate all of them.
In the past few years, a new breed of applications called metadirectories has come to market to remove some of the burden associated with directory integration. Although it may sound like yet another directory, a metadirectory is really a sophisticated directory integration toolkit.
You can use metadirectories to connect and join information between data sources, including directories, databases, and files. The connection process usually involves identifying changes in each data source. Such a connection may be real-time monitoring of changes using a direct access method into the connected data store, an occasional scan of a file-based list of changes, or a review of a full export from the connected data store.
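For the last of these connection styles, a review of a full export, change identification amounts to diffing the current snapshot against the previous one. The sketch below assumes each entry is keyed by a unique identifier and reduced to a single comparable value; a real connector would compare full attribute sets and also detect deletions:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: detect changes between two full exports of a connected data
// store, keyed by a unique identifier.
public class ChangeDetector {
    // Returns entries that are new or modified in the current export.
    public static Map<String, String> changedEntries(
            Map<String, String> previous, Map<String, String> current) {
        Map<String, String> changes = new HashMap<>();
        for (Map.Entry<String, String> e : current.entrySet()) {
            String old = previous.get(e.getKey());
            if (old == null || !old.equals(e.getValue())) {
                changes.put(e.getKey(), e.getValue()); // added or modified
            }
        }
        return changes;
    }
}
```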
The join process is much more complicated and usually involves several steps. Its most important job is determining that an object in one data source is the same as an object in a second data source. This aggregation of information from multiple data sources is one of the most important features of a metadirectory and the heart of the join process. Other tasks performed by a metadirectory may include unification or mapping of schema and object names, filtering unwanted information, and custom processing and transformation of data. Figure 1.25 shows a relatively logical view of how a metadirectory might work to provide a linkage between key enterprise information repositories.
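In the simplest case, the identity decision at the heart of the join is a match on a shared key. The sketch below joins two sources keyed by employee number; this is a deliberately minimal assumption, since real joins often require fuzzier rules (name matching, multiple candidate keys) when no common identifier exists:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the heart of a join: deciding that a record in one data
// source represents the same person as a record in another, then
// aggregating their attributes. Both maps are keyed by employeeNumber.
public class Joiner {
    public static Map<String, Map<String, String>> join(
            Map<String, Map<String, String>> hr,
            Map<String, Map<String, String>> directory) {
        Map<String, Map<String, String>> joined = new HashMap<>();
        for (String key : hr.keySet()) {
            if (directory.containsKey(key)) {
                Map<String, String> merged = new HashMap<>(hr.get(key));
                // Directory values win on conflict in this sketch;
                // a real metadirectory applies per-attribute rules.
                merged.putAll(directory.get(key));
                joined.put(key, merged);
            }
        }
        return joined;
    }
}
```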
Metadirectories provide advanced integration capabilities between different types of data stores.
With careful planning, you can create an environment in which users can be created at a single point. Then, the metadirectory service will instantiate a subset of the users' information in other connected data stores automatically, or with very little manual intervention. The actual point of instantiation may be managed by another type of software that handles the workflow needed by this process. Such software is called provisioning software.
For example, if PeopleSoft, white pages, and an Oracle database all use a telephone number, you would like that telephone number to be entered once and propagated to the other data stores. Metadirectories must also handle environments where both Oracle and PeopleSoft would be able to master new changes depending on business rules.
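Such business rules are often expressed as per-attribute mastering: each attribute has a designated authoritative system, and changes from any other system are rejected. The system and attribute names in this sketch are illustrative assumptions:

```java
import java.util.Map;

// Sketch of per-attribute mastering rules: business logic deciding
// which connected system may author a change to each attribute.
public class MasterRules {
    private static final Map<String, String> MASTER = Map.of(
        "telephoneNumber", "whitepages", // users edit phones in white pages
        "employeeNumber",  "peoplesoft", // HR masters the employee number
        "department",      "peoplesoft");

    // Accept a change only if it comes from the attribute's master system.
    public static boolean acceptChange(String attribute, String fromSystem) {
        return fromSystem.equals(MASTER.get(attribute));
    }
}
```

More sophisticated rule sets make the master conditional (for example, PeopleSoft masters a phone number until the user overrides it), which is exactly the kind of logic metadirectory products let you customize.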
Metadirectories are also proving to be popular in extranet environments where two or more organizations have their own directories and want to share a portion of them with business partners or vendors. Figure 1.26 shows an extranet environment where the addition of Joe Distributor might be propagated to the manufacturer using metadirectory technology.
A user is entered into the distributor directory. The metadirectory detects a change and propagates it to an appropriate location within the manufacturer directory.
It is beyond the scope of this book to offer an in-depth look at metadirectory products. However, directory integration is critical, and some of the functionality provided by metadirectory products can be performed with a general scripting language. We discuss such techniques in detail in chapter 6.
1.8 INTEGRATION AND FEDERATION VIA VIRTUAL DIRECTORY TECHNOLOGY
Usually, metadirectories involve the creation of a new, physical directory, the contents of which are based on an aggregation of multiple information sources. One emerging alternative to metadirectory technology is virtual directory technology, sometimes called directory federation technology. This technology attempts to provide real-time directory access to other types of data stores, such as relational databases and memory-based application components. To visualize this process a bit more easily, think of the virtual directory as a kind of proxy server: the application speaks LDAP to the virtual directory software, and the virtual directory software grabs the data directly from the legacy data store by speaking its native tongue. Figure 1.27 shows a directory-enabled application accessing a virtual directory service that is providing data from existing directories, databases, and application components.
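The core of that proxying step is query translation. The sketch below rewrites a simple LDAP equality filter such as `(cn=Jane Smith)` into SQL against a legacy table; the table and column names are assumptions for illustration, and real products also handle complex filters, query optimization, and result mapping:

```java
// Sketch of the translation a virtual directory performs: an incoming
// LDAP equality filter is rewritten as SQL against a legacy table.
public class FilterTranslator {
    public static String toSql(String ldapFilter) {
        // Strip the surrounding parentheses: "(cn=Jane)" -> "cn=Jane"
        String inner = ldapFilter.substring(1, ldapFilter.length() - 1);
        int eq = inner.indexOf('=');
        String attribute = inner.substring(0, eq);
        String column;
        switch (attribute) {
            case "cn":              column = "FULL_NAME";  break;
            case "telephoneNumber": column = "WORK_PHONE"; break;
            default:                column = attribute.toUpperCase(); break;
        }
        // In production code the value would be bound as a parameter,
        // never concatenated into the query string.
        return "SELECT * FROM PEOPLE WHERE " + column + " = '"
                + inner.substring(eq + 1) + "'";
    }
}
```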
Virtual directories (sometimes called directory federators) accept directory requests and transform them into requests for potentially non-directory information.
Virtual directory technology is not as easy as it may sound. Each underlying data store has its own query language and information model. The virtual directory must find ways to optimize queries and map between LDAP and non-directory information models.
At this time, virtual directory technology is in its infancy, as metadirectories were a few years ago. However, it is emerging as another useful tool for providing a unified view of information to LDAP-enabled applications. It is the only way to view information in many kinds of existing repositories using directory protocols in real time.
1.9 WHY THIS BOOK?
People who have worked with directories know that installing and configuring most directory server software is generally the easiest part of a directory deployment. Writing simple applications to query the directory and use the results is also quite easy, once you understand the basics. Trouble begins to brew when it becomes necessary to keep the information in the directory up to date through both front-end data management and back-end integration with other data sources. This book focuses on making your directory deployments more successful through advanced application and interdirectory integration.
Consider that every element of data stored in a directory must be placed into the directory at some point. You can leverage the data that already exists in other repositories, someone can enter it into the directory through an administrative interface, or the data can be generated by an application. In many environments, all these tasks may need to happen to create a suitable directory service. Figure 1.28 shows some of these different techniques for moving information into the directory.
Data in directories is synchronized with existing data stores, managed through administration applications, and/or generated in some way.
For new and experienced directory service managers charged with deploying or managing a directory service, these management and integration issues are clearly the biggest challenge. Not having the right information, or having stale versions of data, dilutes the value of the directory to all applications that leverage it.
Directory management involves having the right tools and tying in the right information from other, often authoritative, sources of data. In this book, we'll focus on practical solutions to common directory management problems. We will look at Perl code for administration interfaces, directory synchronization, and directory migration. The entire second part of the book is devoted to this topic.
Directory-enabled applications let you use all the information you've been collecting in directories. After all, why collect data if nobody wants to use it? We'll look at ways to leverage LDAP in a variety of application environments with source code in Java. You'll find that such application integration is key to having a useful and important directory that people want to keep current.
With the information in this book, you'll have information flowing through your directories with much less perspiration. Servers that support the LDAP standard can provide a wide variety of functionality to a properly enabled application. This book aims to help you manage your LDAP directories and enable your applications, both new and existing, to support these directories.
The LDAP standard for accessing directory services is important to software developers and system administrators. It can be used through LDAP-enabled applications and various APIs.
A number of different directory services have come into existence in the past few decades; LDAP was derived from another popular standard called X.500. These directory services provide everything from white pages to application security.
Management and application integration are the two biggest issues people tend to encounter when deploying directory services. You can address these issues many ways, as the second and third parts of this book explain.
The IETF has been the driving force behind the core LDAP specifications and many enhancements. Its most important current work is related to replication and access control. Other industry consortia and standards bodies are important in developing LDAP server and application interoperability guidelines, as well as standards that represent data from the LDAP information model in XML.
Metadirectories provide synchronized integration between multiple data repositories, and virtual directories provide real-time integration between applications and existing data via directory protocols. Provisioning tools allow for manual management of the information in directories. Each of these types of tools plays an important role in a well-rounded directory service.
In the remainder of part 1, we will focus on the LDAP standards in more detail, and discuss how to use LDAP tools to communicate with a directory server.
About the Author
Clayton Donley, the co-author of a number of open-source LDAP modules for Perl and Apache, is an independent consultant based in the Chicago area. His clients include Netscape, GTE, and ABN-AMRO. Prior to going independent, he spent seven years in various information technology roles working for Motorola in both the Chicago area and the Asia-Pacific region.
Source of this material
This is Chapter 1: Introduction to LDAP from the book LDAP Programming, Management and Integration (ISBN 1-93011-040-5), written by Clayton Donley and published by Manning Publications Co.