Security Through the Lifetime of a Managed Process: Fitting It All Together
Loading Classes from an Assembly
Assuming that the assembly's minimum requests have been satisfied and that the assembly is indeed granted the right to run code, control passes from the Policy Manager to the Class Loader. Classes are retrieved lazily from the containing assembly; if you access a single class from an assembly containing a hundred classes, only that single class is touched by the Class Loader. The Class Loader is responsible for laying out in memory the method tables and data structures associated with the class and for verifying access (visibility) rules for classes and interfaces. After the class data structures have been properly initialized, we are ready to access individual methods defined within the class.
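The lazy, on-demand behavior described above can be sketched as a toy model. This is an illustrative sketch only, with invented names; the real Class Loader operates on CLR metadata and method tables, not Python dictionaries:

```python
class LazyClassLoader:
    """Toy model of on-demand class loading (hypothetical names;
    not the actual CLR implementation)."""

    def __init__(self, assembly_metadata):
        self._metadata = assembly_metadata  # class name -> raw definition
        self._loaded = {}                   # classes actually laid out

    def get_class(self, name):
        # Only the requested class is touched; the other classes in
        # the assembly stay unloaded until something references them.
        if name not in self._loaded:
            definition = self._metadata[name]
            self._check_visibility(definition)
            self._loaded[name] = self._lay_out(definition)
        return self._loaded[name]

    def _check_visibility(self, definition):
        # Stand-in for enforcing access (visibility) rules.
        if definition.get("visibility") == "private":
            raise PermissionError("access rules violated")

    def _lay_out(self, definition):
        # Stand-in for building method tables and data structures.
        return {"methods": definition["methods"]}

loader = LazyClassLoader({
    "Widget": {"visibility": "public", "methods": ["Draw"]},
    "Hidden": {"visibility": "private", "methods": []},
})
widget = loader.get_class("Widget")
print(len(loader._loaded))  # only one of the two classes was loaded
```

Even though the "assembly" above describes two classes, touching `Widget` loads only `Widget`; `Hidden` is never laid out unless something asks for it.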
Just-In-Time Verification and Compilation of Methods
Referring back to Figure 1, after the Class Loader has finished its work, we are ready to verify the MSIL contained within the assembly and generate native code from it. This is the job of the Just-In-Time compiler and type-safety verifier (also known as the JIT compiler/verifier). As with loading classes from an assembly, methods within a class are JIT verified and compiled lazily, on demand. When a method is called for the first time within the process, the MSIL for the method is checked for compliance with the published type-safety rules and then (assuming it passes) converted into native code.
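The verify-then-compile-on-first-call pattern can be sketched as follows. All names here are invented for illustration; Python's `compile`/`eval` stand in for native code generation, and the string check stands in for the real type-safety rules:

```python
def verify_msil(msil):
    """Stand-in for type-safety verification of a method's MSIL."""
    if "unsafe" in msil:
        raise TypeError("method failed type-safety verification")

class JitCompiler:
    """Toy model of lazy JIT: each method is verified and compiled
    the first time it is called, then the compiled form is reused."""

    def __init__(self):
        self._native_cache = {}

    def call(self, name, msil):
        if name not in self._native_cache:       # first call only
            verify_msil(msil)
            self._native_cache[name] = compile(msil, name, "eval")
        return eval(self._native_cache[name])    # run the "native" code

jit = JitCompiler()
print(jit.call("Add", "1 + 2"))  # verified + compiled on first call
print(jit.call("Add", "1 + 2"))  # served from the cache thereafter
```

A method that fails verification raises before any code is generated, mirroring the fact that unverifiable MSIL never reaches execution.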
After a method has been successfully processed by the JIT compiler/verifier and converted to native code, it is ready to be executed. As the method executes, references to unprocessed methods, classes, and assemblies can occur. These references will cause the Runtime to recursively call the JIT compiler/verifier, Class Loader, or Assembly Loader and Policy Manager as necessary. These operations happen implicitly, "under the covers," as the managed code within the application executes.
Execution-Time Permission Enforcement
So far, everything that we have described in this section concerning the operation of the execution engine is largely transparent to the code author and the user. Assembly references are resolved, assemblies are loaded, policy evaluation occurs, classes are laid out, and MSIL is verified and converted into native code, all without any external indication or control. (Of course, in the event that an error occurs at any stage in this processing pipeline, such as the failure of an inheritance demand in the JIT compiler/verifier, an exception will be generated and program execution will not proceed normally.) However, essentially all of the security processing that occurs from assembly load through JIT compilation/verification really exists primarily to set up the execution environment for execution-time permission enforcement.
Execution-time permission enforcement, the final security-related component in Figure 1, is the raison d'être of the .NET Framework security system. All of the evidence gathering, policy specification, and evaluation that has occurred up to this point was performed simply so that we can associate a permission grant set with each assembly at execution time. We need that association between assemblies and their grant sets so that we can properly perform and evaluate stack-walking security demands. As a program runs within the Runtime, security demands will generally occur as a result of one of the following three actions:
- An explicit call to a permission's Demand() method
- An explicit call to a method protected by a declarative security attribute specifying a demand
- Implicitly as part of a platform invoke or COM interop call, because all calls to native code through these mechanisms are automatically protected by security checks
The goal of a stack-walking permission check is to verify that the demanded permission is within the policy grants of all the code in the call chain. For every piece of code on the call stack above the method performing the check, the Runtime security system ensures that the code has been granted the particular demanded permission by the Policy Manager. Whenever a method within a secured assembly is about to perform a potentially dangerous action, the method must first perform a security demand to check that the action is permitted by policy.
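A stack-walking demand can be modeled as a short sketch. The names (`Permission`, `demand`, the grant table) are hypothetical stand-ins, not the real `System.Security` API; the point is only the walk itself, which succeeds when every assembly in the call chain holds the demanded permission:

```python
class SecurityException(Exception):
    pass

class Permission:
    def __init__(self, name):
        self.name = name

    def demand(self, call_stack, grants):
        """Walk every frame above the demanding method and verify that
        the assembly it came from was granted this permission."""
        for frame in call_stack:
            assembly = frame["assembly"]
            if self.name not in grants.get(assembly, set()):
                raise SecurityException(
                    f"{assembly} lacks permission {self.name!r}")

# Grant sets as assigned by the Policy Manager (hypothetical values).
grants = {
    "TrustedLib": {"FileIO", "UI"},
    "DownloadedApp": {"UI"},
}

# Call stack, most recent frame first.
stack = [
    {"assembly": "TrustedLib", "method": "WriteLog"},
    {"assembly": "DownloadedApp", "method": "Main"},
]

try:
    Permission("FileIO").demand(stack, grants)
except SecurityException as e:
    print("demand failed:", e)  # DownloadedApp was never granted FileIO
```

Note that the demand fails even though `TrustedLib` itself holds `FileIO`: the semitrusted caller further up the stack does not, which is exactly the luring attack the stack walk is designed to stop.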
As a practical matter, note that because the Policy Manager assigns grant sets to assemblies, a sequence of successive frames on the stack corresponding to code loaded from the same assembly will have the same set of granted permissions. Consequently, the security system needs only check permission grants at every assembly transition on the stack, not every method transition. Keep this behavior in mind when deciding how you want to organize your code into various assemblies; a poor design that includes a very chatty interface across two assemblies can slow down the performance of the security system.
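This optimization can be sketched as a toy stack walk (hypothetical names, not the actual CLR implementation) that skips consecutive frames from the same assembly, so a ten-frame stack spanning three assembly transitions costs only three grant checks:

```python
def demand_with_transitions(permission, call_stack, grants):
    """Toy stack walk that checks grants only at assembly transitions:
    consecutive frames from the same assembly share one grant set, so
    re-checking each of them would be redundant."""
    previous = None
    checks = 0
    for frame in call_stack:
        assembly = frame["assembly"]
        if assembly == previous:
            continue            # same assembly: grant set already checked
        checks += 1
        if permission not in grants.get(assembly, set()):
            raise PermissionError(f"{assembly} lacks {permission!r}")
        previous = assembly
    return checks

grants = {"App": {"UI"}, "Lib": {"UI"}}
# A "chatty" interface: ten frames but only three assembly transitions.
stack = ([{"assembly": "App"}] * 4 + [{"assembly": "Lib"}] * 5
         + [{"assembly": "App"}])
print(demand_with_transitions("UI", stack, grants))  # 3 checks, not 10
```

The flip side, as the text notes, is that a chatty boundary between two assemblies maximizes the number of transitions, and therefore the number of checks a demand must perform.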
In this article, we have demonstrated how the various components of the .NET Framework security system work together to provide a robust execution environment for semitrusted code. Assembly permission requests specified by code authors at development time can influence the permissions granted to an assembly by the Policy Manager. Similarly, the use of the Common Language Runtime's strong naming facilities when the assembly is authored provides execution-time integrity protection and can also influence the set of granted permissions. Deployment-time tools, such as PEVerify, allow administrators to check that an assembly is type-safe. We then showed how, when the assembly is loaded into an instance of the Common Language Runtime for execution, the Policy Manager determines the assembly's permission grants from the set of evidence known about it, and how runtime security checks are performed against those granted permissions.
About the Author

Brian A. LaMacchia is the Development Lead for .NET Framework Security at Microsoft Corporation in Redmond, WA, a position he has held since April 1999. Previously, Dr. LaMacchia was the Program Manager for core cryptography in Windows 2000 and, prior to joining Microsoft in 1997, he was a Senior Member of Technical Staff in the Public Policy Research Group at AT&T Labs — Research in Florham Park, NJ. He received S.B., S.M., and Ph.D. degrees in Electrical Engineering and Computer Science from MIT in 1990, 1991, and 1996, respectively. He is the coauthor of .NET Framework Security along with Sebastian Lange, Matthew Lyons, Rudi Martin, and Kevin T. Price. Their book is published by Addison-Wesley.