Security Through the Lifetime of a Managed Process: Fitting It All Together


In this article, we will focus on how the individual pieces of the security system come together and interact to provide a secure environment for executing semitrusted code. After reading this article, you should be able to

  • Describe the security actions that must be made by developers at code authoring time, including declarative permission requests and appropriate permission demands
  • Describe the various mechanisms by which managed code can be installed onto a particular machine
  • Describe the function of the Native Image Generator and PEVerify tools and their relationship to the security system
  • Describe the roles the loader, the policy system, and the Just-In-Time compiler/verifier play in the CLR security system

The lifecycle of any particular managed process can be divided into three distinct stages—development, deployment, and execution. Software authors, administrators, and users make security decisions at each stage of the process that ultimately determine the permissions with which an assembly runs on a particular machine. We begin this article with an overview of the security decisions that face developers at code authoring time and then proceed to deployment and execution-time considerations in later sections.

Development-Time Security Considerations

The security features within the .NET Framework were designed, in part, to make it much easier for developers to write secure code. When authoring code, developers need to consider two main factors—the security requirements of the assemblies they are authoring and the sensitive resources and data (if any) that are potentially exposed by their classes to other code. The two factors are related but distinct, and it is slightly easier to understand the relationship between them if we begin with a discussion of the second factor, protecting sensitive resources, and then go back to investigate how developers indicate and declare security requirements of their assemblies.

The first security-related action a developer must perform when beginning work on a new assembly is to determine whether the assembly will expose any sensitive resources through its classes and methods. That is, will the classes and methods within the assembly expose sensitive resources to callers of those methods? If the answer to this question is yes, the assembly must be a secure assembly. Secure assemblies are discussed in detail in our book, .NET Framework Security, but the basic issue is this—if the assembly you are authoring is going to make a new sensitive resource available to semitrusted code, your assembly must perform appropriate security checks within each method that provides access to or operates on the sensitive resource. Essentially, your new assembly is going to be a gatekeeper or guard of the protected resource, and you must treat every request for access to the resource with an appropriate degree of caution.
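To make the gatekeeper role concrete, here is a minimal sketch of a method in a secure assembly. The AudioMixer class and its device are hypothetical, and a standard SecurityPermission demand stands in for whatever permission would actually protect the resource:

```csharp
using System.Security.Permissions;

public class AudioMixer {
    // Hypothetical sensitive resource: a sound device exposed to callers.
    public void OpenDevice() {
        // Demand that every caller on the stack holds unmanaged-code rights
        // (a stand-in for a custom permission guarding the device) before
        // any access to the underlying resource is allowed.
        new SecurityPermission(SecurityPermissionFlag.UnmanagedCode).Demand();

        // ...only reached if the demand succeeds; access the device here...
    }
}
```

If any caller in the chain lacks the demanded permission, the demand throws a SecurityException and the resource is never touched.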

How do you determine whether your new assembly must be a secure assembly? This basic determination revolves around the list of resources exposed by your assembly to other code and whether those resources are sensitive or already protected. Consider the following scenario. Suppose that you want to write a method that will write a message (we’ll use “Hello, World!” for historical reasons) to a file named hello.txt located on the C: drive of the computer. Using the .NET Framework, your code might look as shown in Listing 1. This program creates a FileStream object mapped to the C:\hello.txt file on the hard disk (creating the file if necessary) and writes the string "Hello, World!" to that file.

LISTING 1 Sample Hello, World! Program

using System;
using System.IO;

public class HelloWorld {
  public static void Main(string[] args) {
    FileStream fs = new FileStream(@"C:\hello.txt",
                                   FileMode.OpenOrCreate,
                                   FileAccess.Write);
    StreamWriter sw = new StreamWriter(fs);
    sw.Write("Hello, World!");
    sw.Close();
  }
}

Does the program in Listing 1 constitute a secure assembly? That is, does this simple program require the addition of any security checks or permission demands? The answer is “No, it does not,” because the program, by itself, does not expose any new sensitive resources. The only resource that is used or modified by the HelloWorld program is the C:\hello.txt file that is associated with the FileStream fs, and the FileStream class itself performs the necessary security checks to determine whether callers of its methods (including the HelloWorld program) should be granted access to the file system objects that it exposes.

The class libraries that make up the .NET Framework are secure assemblies; they implement appropriate security checks, in the form of permission demands, for the resources that they expose. Every sensitive resource that is made available to semi-trusted code through the .NET Framework class library is protected by demands for a related security permission. For example, the constructors on the FileStream class demand instances of the FileIOPermission before returning any instances of the class. Similarly, the registry-related classes demand instances of RegistryPermission, and the network-related classes demand instances of SocketPermission, WebPermission, or DNSPermission as appropriate to their function. This is one of the great advantages of writing a program on top of the .NET Framework; if all the resources that you use in your programs are already protected by appropriate permission demands, you do not need to add additional security checks to your own code. Because the HelloWorld program in Listing 1 only uses resources that are exposed through the class libraries of the .NET Framework, no additional security checks need to be made in our code.


Even if your assembly does not expose any sensitive resources, if it performs any operations that affect the normal behavior of the .NET Framework security system, it must be a secure assembly. For example, if a method in your assembly calls the Assert() method on a permission, that modifies the behavior of security stack walks, and your assembly should be secure. Similarly, if you ever suppress the runtime security check that normally occurs when using platform invoke or COM interoperability via the SuppressUnmanagedCodeSecurityAttribute attribute, your assembly needs to be secure.
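As a sketch of why an Assert() makes an assembly security-relevant, consider a hypothetical logging class that writes to a fixed file on behalf of semitrusted callers; the class name and path are illustrative, and the asserting assembly itself must be granted both FileIOPermission and the right to assert:

```csharp
using System.IO;
using System.Security;
using System.Security.Permissions;

public class Logger {
    public static void Log(string message) {
        FileIOPermission p = new FileIOPermission(
            FileIOPermissionAccess.Append, @"C:\logs\app.log");

        // Stop the stack walk here: semitrusted callers higher on the stack
        // no longer need to hold FileIOPermission themselves.
        p.Assert();
        try {
            using (StreamWriter sw = File.AppendText(@"C:\logs\app.log"))
                sw.WriteLine(message);
        }
        finally {
            // Always revert so the assertion does not outlive this call.
            CodeAccessPermission.RevertAssert();
        }
    }
}
```

Because the Assert() vouches for its callers, the method must be careful about what it writes and where—exactly the kind of scrutiny a secure assembly requires.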

Even though our HelloWorld program does not expose any sensitive resources that require protection, it does make use of a protected resource—namely, the FileStream object that represents the C:\hello.txt file. Our program will only run successfully at execution time if it is granted sufficient access rights to write to the C:\hello.txt file. We can indicate this security requirement for our assembly to run through the use of assembly-level declarative permission requests. Basically, declarative security attributes are a mechanism for communicating assembly permission requirements to the policy system. Referring back to Listing 1, because HelloWorld will only operate correctly if it is granted write access to the C:\hello.txt file, we can indicate this requirement by adding the following assembly attribute to our source code:

[assembly: System.Security.Permissions.FileIOPermission(
    System.Security.Permissions.SecurityAction.RequestMinimum,
    Write = @"C:\hello.txt")]

This attribute indicates that a minimum grant of the FileIOPermission, including write access to C:\hello.txt, is required for the program to run.


If you determine that your assembly will be exposing a sensitive resource, you must secure that access with appropriate permission demands.


There is a subtle interaction that occurs between the Runtime and your source code compiler when you use declarative security attributes within your programs. At compile time, declarative security attributes are checked for correctness by the version of the Runtime installed with your compiler and converted into a different format before being embedded in the metadata of the output module or assembly.

The final important security-related decision that you must make at code authoring time is whether you want your assembly to have a strong name. Strong names are cryptographically protected names for assemblies, built on top of public key cryptography. The CLR uses strong names to provide both integrity protection for your assemblies and cryptographically strong binding among assemblies. See our book for a description of how to use the Strong Name tool (Sn.exe) distributed with the .NET Framework SDK to create strong name key pairs and how to build strong names into your assemblies using the AssemblyKeyFile, AssemblyKeyName, and AssemblyDelaySign assembly-level custom attributes.


We recommend that all developers take advantage of the strong name features of the CLR and digitally sign their assemblies with strong names. Only strongly named assemblies can be added to the Global Assembly Cache, and strong names provide a very high degree of protection against accidental or malicious tampering with your assemblies. Also, version checking and side-by-side execution are only available for strongly named assemblies. Note that once you strong name your assembly, you will have to annotate it with the AllowPartiallyTrustedCallersAttribute if you want it to be callable from semitrusted assemblies.

Deployment-Time Security Issues

After you have finished writing, compiling, and strong name signing your assembly, you must deploy it to the machines on which you want it to run. Traditionally, deploying Windows software has consisted of

  1. Combining compiled code and installation instructions into an installation package
  2. Copying the package onto the target machines
  3. Running the package to place the included compiled code in the proper directories on the machine and perform housekeeping tasks such as registry key configuration

While this particular method of deployment is still supported, the .NET Framework also supports over-the-network deployment, dynamic loading of code, and assembly sharing through the Global Assembly Cache. Deployment scenarios and features are described in detail in the .NET Framework SDK documentation in the section titled “Deploying Applications.”

The particular method you choose to deploy your application is not a security decision per se, but it may impact the security context in which your application runs. Specifically, the default security policy shipped with the .NET Framework is based on the Internet Explorer Zones model, so the set of permissions granted to your assembly will vary depending on what Zone it is located in when it is loaded into the CLR. For example, assemblies downloaded as part of a Web-based application from a Web server located on your local intranet are likely to be granted fewer permissions than when installed in a directory on a local hard drive.

The primary security decision facing developers and administrators when deploying managed applications is whether to add their assemblies to the Global Assembly Cache (GAC). Assemblies that are present in the GAC are potentially accessible to any other assembly running on the machine, including semitrusted code. For example, an assembly running from a Web server in a semitrusted context can instantiate types located in assemblies in the GAC without having read access to the physical files that make up the assembly. (In particular, the Assembly.Load() static method always probes the GAC for the requested assembly.) For this reason, assemblies that have not been properly secured (either with appropriate security permission or by preventing semitrusted assemblies from binding to the shared assembly) should never be loaded into the GAC.

The .NET Framework includes a number of command-line tools that have security-related functions. The Code Access Security Policy tool, Caspol.exe, is a command-line tool that can be used to modify security policy. The Permissions View tool, Permview.exe, will display assembly-level permission requests and declarative demands contained within a specific assembly. The Strong Name (Sn.exe) and SecUtil (SecUtil.exe) utilities are useful for strong name generation, construction, and extraction. PEVerify is a standalone tool that performs type-safety verification and metadata validation checks on an assembly; it may be used to check that classes and methods within an assembly are type-safe without loading the assembly into the CLR. All of these tools are documented in the .NET Framework SDK documentation in the section titled “Tools and Debugger.”

One other tool that, while not directly related to security, is impacted by security operations is the Native Image Generator (Ngen.exe). The Native Image Generator tool (sometimes called the “pre-JITer”) creates native code from a managed assembly and caches the native code locally. When an assembly is Ngen’ed (processed by the Native Image Generator), any LinkDemand security checks encountered are evaluated against the set of permissions that would be granted to the assembly under the current security policy. If the security policy later changes, the native image may be invalidated. Specifically, if the set of permissions granted to an Ngen’ed assembly under the new policy is not a superset of those granted under the old policy, the native image generated under the old security policy will be invalidated and ignored, and the assembly will be Just-In-Time compiled at runtime. Thus, you may need to re-run the Ngen utility to regenerate native images for your assemblies after making modifications to the security policy that change the set of permissions granted to your assemblies.

Execution-Time Security Issues

Having compiled and deployed your assemblies to a target machine, the next step, of course, is to run your code within the Common Language Runtime. A lot of steps occur “under the covers” when you run your HelloWorld.exe managed executable. In this section, we’re going to walk through the process by which managed code contained within an assembly is loaded, evaluated by security policy, Just-In-Time compiled, type-safety verified, and finally allowed to execute. The overall process of developing, deploying, and executing managed code is depicted graphically in Figure 1. We have discussed the Development and Deployment boxes previously and will focus solely on Execution in this section.

High-level diagram of the process

FIGURE 1 High-level diagram of the process of developing, deploying, and executing managed code.

The diagram in Figure 1 shows how an individual assembly is loaded and executed within the Runtime, but there is a key initial step that must occur before loading any assemblies. Every managed application is run on top of the Runtime within the context of a host. The host is the trusted piece of code that is responsible for launching the Runtime, specifying the conditions under which the Runtime (and thus managed code within the Runtime) will execute, and controlling the transition to managed code execution. The .NET Framework includes a shell host for launching executables from the command line, a host that plugs into Internet Explorer that allows managed objects to run within a semitrusted browser context, and the ASP.NET host for Web applications. After the host has initialized the Runtime, the assembly containing the entry point for the application must be loaded and then control can be transferred to that entry point to begin executing the application. (In the case of shell-launched C# executables, this entry point is the Main method defined in your application.)

Loading an Assembly

Referring to Figure 1, the first step that occurs when loading an assembly into the Runtime is to locate the desired assembly. Typically, assemblies are located on disk or downloaded over the network, but it is also possible for an assembly to be “loaded” dynamically from a byte array. In any case, after the bytes constituting the assembly are located, they are handed to the Runtime’s Assembly Loader. The Assembly Loader parses the contents of the assembly and creates the data structures that represent the contents of the assembly to the Runtime. Control then passes to the Policy Manager.


When the Assembly Loader is asked to resolve a reference to an assembly, the reference may be either a simple reference, consisting of just the “friendly name” of an assembly, or a “strong” reference that uses the cryptographic strong name of the referenced assembly. Strong references only successfully resolve if the target assembly has a cryptographically valid strong name (that is, it was signed with the private key corresponding to the public key in the strong name and has not been tampered with since being signed). For performance reasons, assemblies loaded from the Global Assembly Cache are strong name verified only when they are inserted into the GAC; the Runtime depends on the underlying operating system to keep the contents of the GAC secure.

The Assembly Loader is also responsible for resolving file references within a single assembly. An assembly always consists of at least a single file, but that file can contain references to subordinate files that together constitute a single assembly. Such file references are contained within the “assembly manifest” that is stored in the first file in the assembly (the file that is externally referenced by other assemblies). Every file reference contained within the manifest includes the cryptographic hash of the contents of the referenced file. When a file reference needs to be resolved, the Assembly Loader finds the secondary file, computes its hash value, compares that hash value to the value stored in the manifest, and (assuming the match) loads the subordinate file. If the hash values do not match, the subordinate file has been tampered with after the assembly was linked together and the load of the subordinate file fails.

Resolving Policy for an Assembly

The Policy Manager is a core component of the Runtime security system. Its job is to decide what permissions should be granted, in accordance with the policy specification, to every single assembly loaded by the Runtime. Before any managed code from an assembly is executed, the Policy Manager has determined whether the code should be allowed to run at all and, if it is allowed to run, the set of rights with which it will run. Figure 2 provides a high-level view of the operation of the Policy Manager.

High-level diagram of Policy Manager

FIGURE 2 High-level diagram of the Policy Manager.

There are three distinct inputs to the Policy Manager:

  • The current security policy
  • The evidence that is known about the assembly
  • The set of permission requests, if any, that were made in assembly-level metadata declarations by the assembly author

The security policy is the driving document; it is the specification that describes in detail what rights are granted to an assembly. The security policy in effect for a particular application domain at any point in time consists of four policy levels:

  • Enterprise-wide level
  • Machine-wide level
  • User-specific level
  • An optional, application domain-specific level

Each policy level consists of a tree of code groups and membership conditions, and it is against this tree that the evidence is evaluated. Each policy level is evaluated independently to arrive at a set of permissions that the level would grant to the assembly. The intersection of these level grants creates the maximal set of permissions that can be granted by the Policy Manager.
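The intersection of the level grants can be sketched with the PermissionSet API. In this illustrative example, the level names and the specific permissions are hypothetical; only the permission present in both level grants survives the intersection:

```csharp
using System;
using System.Security;
using System.Security.Permissions;

class LevelIntersection {
    static void Main() {
        // Grant computed by a hypothetical machine-wide policy level.
        PermissionSet machine = new PermissionSet(PermissionState.None);
        machine.AddPermission(
            new FileIOPermission(FileIOPermissionAccess.Read, @"C:\data"));
        machine.AddPermission(
            new SecurityPermission(SecurityPermissionFlag.Execution));

        // Grant computed by a hypothetical user-specific policy level.
        PermissionSet user = new PermissionSet(PermissionState.None);
        user.AddPermission(
            new SecurityPermission(SecurityPermissionFlag.Execution));

        // The maximal grant is the intersection of all the level grants:
        // here, only the right to execute code survives.
        PermissionSet maximal = machine.Intersect(user);
        Console.WriteLine(maximal);
    }
}
```

Because the grants are intersected, adding permissions at one level never grants more than the most restrictive level allows.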

The second input to the Policy Manager is the evidence, the set of facts that are known about the assembly. There are two types of evidence that are provided to the policy system—implicitly trusted evidence and initially untrusted evidence. Implicitly trusted evidence consists of facts that the Policy Manager assumes to be true either because the Policy Manager computed those facts itself or because the facts were supplied by the trusted host that initialized the Runtime (“host-provided evidence”). (An example of the former type of implicitly trusted evidence is the presence of a cryptographically valid strong name or Authenticode signature. The URI from which an assembly was loaded is an example of the latter type of implicitly trusted evidence.) Initially untrusted evidence consists of facts that were embedded in the assembly (“assembly provided evidence”) at development time by the code author; they must be independently verified before being used or believed.


The default security policy that is installed by the .NET Framework never considers initially untrusted evidence, but the facility is present as an extension point. For example, third-party certifications of an assembly might be carried as assembly provided evidence.
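As an illustration, the implicitly trusted, host-provided evidence for an assembly can be inspected through its Evidence collection. This is a minimal sketch; the exact evidence objects present depend on how the assembly was loaded:

```csharp
using System;
using System.Reflection;

class ShowEvidence {
    static void Main() {
        // Each object in the collection is one fact about the assembly:
        // typically Zone, Url, and (if signed) StrongName or Publisher.
        foreach (object fact in Assembly.GetExecutingAssembly().Evidence)
            Console.WriteLine(fact.GetType().Name);
    }
}
```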

The third and final input to the Policy Manager is the set of permission requests made by the assembly. These declarations provide hints to the Policy Manager concerning

  • The minimum set of permissions that the assembly must be granted to function properly
  • The set of permissions that the assembly would like to be granted but are not strictly necessary for minimal operation
  • The set of permissions that the assembly never wants to be granted by the policy system

After computing the maximal set of permissions that can be granted to the assembly in accordance with the policy levels, the Policy Manager can reduce that set based on the contents of the permission requests.
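The three kinds of requests are expressed with assembly-level declarative attributes. The following sketch shows one of each; the specific paths and permissions are illustrative:

```csharp
using System.Security.Permissions;

// Minimum: the assembly cannot run without write access to this file.
[assembly: FileIOPermission(SecurityAction.RequestMinimum,
                            Write = @"C:\hello.txt")]

// Optional: useful if granted, but the assembly degrades gracefully
// without it.
[assembly: RegistryPermission(SecurityAction.RequestOptional,
                              Unrestricted = true)]

// Refuse: never grant unmanaged-code rights, even if policy would allow it.
[assembly: SecurityPermission(SecurityAction.RequestRefuse,
                              UnmanagedCode = true)]
```

Together, these requests shrink the grant set from the maximal set computed by the policy levels down to (minimum ∪ optional) minus the refused permissions.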


Permission requests never increase the set of permissions granted to an assembly beyond the maximal set determined by the policy levels.

After the Policy Manager has determined the in-effect security policy, the set of applicable evidence, and the set of permission requests, it can compute the actual set of permissions to be associated with the assembly. Recall that permissions are granted by the policy system on an assembly-wide basis; all code within an assembly is granted the same set of rights when that assembly is loaded into an application domain. After the set of granted permissions has been computed by the Policy Manager, that set is associated with the Runtime-internal objects that represent the assembly, and individual classes contained within the assembly can be accessed.

Before leaving the Policy Manager, we should mention that there are two security conditions enforced by the Policy Manager that could cause the load of the assembly to fail at this point and never proceed to the Class Loader. The first condition concerns the assembly’s set of minimum permission requests. If an assembly contains any minimum permission requests, these requests must be satisfied by the grant set that is output by the policy system for processing of the assembly to continue. If the minimum request set is not a subset of the resulting grant set, a PolicyException is immediately thrown. The second condition that is checked before proceeding is that the assembly has been granted the right to execute code. “The right to run code” on top of the Runtime is represented by an instance of the SecurityPermission class (specifically, SecurityPermission(SecurityPermissionFlag.Execution)). Under default policy, the Policy Manager will check that the set of permissions granted to an assembly contains at least an instance of this flavor of SecurityPermission. If the right to run code is not granted for some reason, that also generates a PolicyException and no further processing of the assembly occurs.

Loading Classes from an Assembly

Assuming that the assembly’s minimum requests have been satisfied and that the assembly is indeed granted the right to run code, control passes from the Policy Manager to the Class Loader. Classes are retrieved lazily from the containing assembly; if you access a single class from an assembly containing a hundred classes, only that single class is touched by the Class Loader. The Class Loader is responsible for laying out in memory the method tables and data structures associated with the class and verifying access (visibility) rules for classes and interfaces. After the class data structures have been properly initialized, we are ready to access individual methods defined within the class.

Just-In-Time Verification and Compilation of Methods

Referring back to Figure 1, after the Class Loader has finished its work, we are ready to verify the MSIL contained within the assembly and generate native code from it. This is the job of the Just-In-Time compiler and type-safety verifier (also known as the JIT compiler/verifier). As with loading classes from an assembly, methods within a class are JIT verified and compiled lazily on an as-demanded basis. When a method is called for the first time within the process, the MSIL for the method is checked for compliance with the published type-safety rules and then (assuming it passes) converted into native code.

The JIT compiler/verifier also plays a direct role in the evaluation of class- and method-level declarative security actions. Recall that there are three types of declarative permission demands, represented by the SecurityAction enumeration values Demand, LinkDemand, and InheritanceDemand. Inheritance demands, which control the right to subclass a class, and link-time demands, which restrict the right to bind to a method, are checked and enforced by the JIT as the class is being compiled. Failure to satisfy an inheritance or link-time demand will result in the generation of a SecurityException. Runtime demands (SecurityAction.Demand) are converted by the JIT compiler/verifier into a native code wrapper around the body of the method that is protected by the demand. Every call to the method must first pass through the wrapper, satisfying the security demand it represents, before entering the body of the method.
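The three declarative actions might appear together as in the following sketch; the Plugin class and the paths are hypothetical:

```csharp
using System.Security.Permissions;

// InheritanceDemand: checked by the JIT when a subclass is compiled;
// only code granted unmanaged-code rights may derive from this type.
[SecurityPermission(SecurityAction.InheritanceDemand, UnmanagedCode = true)]
public class Plugin {
    // LinkDemand: checked once by the JIT against the immediate caller
    // at bind time, not on every call.
    [FileIOPermission(SecurityAction.LinkDemand, Read = @"C:\config")]
    public virtual void LoadConfig() { /* ... */ }

    // Demand: the JIT emits a native wrapper that performs a full
    // stack walk on every call before the method body runs.
    [FileIOPermission(SecurityAction.Demand, Write = @"C:\data")]
    public virtual void SaveData() { /* ... */ }
}
```

The choice between LinkDemand and Demand is a performance/security trade-off: a link-time demand is cheaper but inspects only the immediate caller, while a runtime demand inspects the entire call chain.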

After a method has been successfully processed by the JIT compiler/verifier and converted to native code, it is ready to be executed. As the method executes, references to unprocessed methods, classes, and assemblies can occur. These references will cause the Runtime to recursively call the JIT compiler/verifier, Class Loader, or Assembly Loader and Policy Manager as necessary. These operations happen implicitly, “under the covers,” as the managed code within the application executes.

Execution-Time Permission Enforcement

So far, everything that we have described in this section concerning the operation of the execution engine is largely transparent to the code author and the user. Assembly references are resolved, assemblies are loaded, policy evaluation occurs, classes are laid out, and MSIL is verified and converted into native code, all without any external indication or control. (Of course, in the event that an error occurs at any stage in this processing pipeline, such as the failure of an inheritance demand in the JIT compiler/verifier, an exception will be generated and program execution will not proceed normally.) However, essentially all of the security processing that occurs from assembly load through JIT compilation/verification really exists primarily to set up the execution environment for execution-time permission enforcement.

Execution-time permission enforcement, the final security-related component in Figure 1, is the raison d’être of the .NET Framework security system. All of the evidence gathering, policy specification, and evaluation that has occurred up to this point was performed simply so that we can associate a permission grant set with each assembly at execution time. We need that association between assemblies and their grant sets so that we can properly perform and evaluate stack-walking security demands. As a program runs within the Runtime, security demands will generally occur as a result of one of the following three actions:

  • An explicit call to a permission’s Demand() method
  • An explicit call to a method protected by a declarative security attribute specifying a demand
  • Implicitly as part of a platform invoke or COM interop call, because all calls to native code through these mechanisms are automatically protected by security checks

The goal of a stack-walking permission check is to verify that the demanded permission is within the policy grants of all the code in the call chain. For every piece of code on the call stack above the method performing the check, the Runtime security system ensures that the code has been granted the particular demanded permission by the Policy Manager. Whenever a method within a secured assembly is about to perform a potentially dangerous action, the method must first perform a security demand to check that the action is permitted by policy.
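An imperative demand of this kind might look like the following sketch; the SafeDelete class is hypothetical:

```csharp
using System.Security;
using System.Security.Permissions;

public static class SafeDelete {
    public static bool TryDelete(string path) {
        try {
            // Walk the stack: every caller above this frame must have
            // been granted full file access to 'path' by policy.
            new FileIOPermission(
                FileIOPermissionAccess.AllAccess, path).Demand();
        }
        catch (SecurityException) {
            // Some caller in the chain lacks the permission; refuse
            // to perform the dangerous action.
            return false;
        }
        System.IO.File.Delete(path);
        return true;
    }
}
```

Note that the demand precedes the dangerous operation; by the time File.Delete runs, every frame on the stack has already been vetted.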


As a practical matter, note that because the Policy Manager assigns grant sets to assemblies, a sequence of successive frames on the stack corresponding to code loaded from the same assembly will have the same set of granted permissions. Consequently, the security system needs only check permission grants at every assembly transition on the stack, not every method transition. Keep this behavior in mind when deciding how you want to organize your code into various assemblies; a poor design that includes a very chatty interface across two assemblies can slow down the performance of the security system.


In this article, we have demonstrated how the various components of the .NET Framework security system work together to provide a robust execution environment for semitrusted code. Assembly permission requests specified by code authors at development time can influence the permissions granted to an assembly by the Policy Manager. Similarly, the use of the Common Language Runtime’s strong naming facilities when the assembly is authored provides execution-time integrity protection and can also influence the set of granted permissions. Deployment-time tools, such as PEVerify, allow administrators to check that an assembly is type-safe. When the assembly is loaded into an instance of the Common Language Runtime to execute, the Policy Manager determines the permission grants for the assembly from the set of evidence known about the assembly. Runtime security checks are performed against these granted permissions.

About the Author

Brian A. LaMacchia is the Development Lead for the .NET Framework Security at Microsoft Corporation in Redmond, WA, a position he has held since April 1999. Previously, Dr. LaMacchia was the Program Manager for core cryptography in Windows 2000 and, prior to joining Microsoft in 1997, he was a Senior Member of Technical Staff in the Public Policy Research Group at AT&T Labs — Research in Florham Park, NJ. He received S.B., S.M., and Ph.D. degrees in Electrical Engineering and Computer Science from MIT in 1990, 1991, and 1996, respectively. He is the coauthor of .NET Framework Security along with Sebastian Lange, Matthew Lyons, Rudi Martin, and Kevin T. Price. Their book is published by Addison-Wesley.
