February 28, 2021
Environment Mapping Techniques

By Addison Wesley

7.2: Reflective Environment Mapping

Let's start with the most common use of environment mapping: creating a chrome-like reflective object. This is the bare-bones application of the technique, yet it already produces nice results, as shown in Figure 7-4.

In this example, the vertex program computes the incident and reflected rays. It then passes the reflected ray to the fragment program, which looks up the environment map and uses it to add a reflection to the fragment's final color. To make things more interesting, and to make our example more like a real application, we blend the reflection with a decal texture. A uniform parameter called reflectivity allows the application to control how reflective the material is.

You might wonder why we don't use the fragment program to calculate the reflection vector. A reflection vector computed per-fragment by the fragment program would deliver higher image quality, but it wouldn't work on basic fragment profiles. Therefore, we leave the per-fragment implementation as an exercise for you. Later in this chapter, we discuss the trade-offs and implications of using the vertex program versus using the fragment program.


Figure 7-4. Reflective Environment Mapping

7.2.1: Application-Specified Parameters

Table 7-1 lists the data that the application needs to provide to the graphics pipeline.

Table 7-1. Application-Specified Parameters for Per-Vertex Environment Mapping

Parameter                                       Variable Name   Type

Object-space vertex position                    position        float4
Object-space vertex normal                      normal          float3
Texture coordinates                             texCoord        float2

Concatenated modelview and projection matrices  modelViewProj   float4x4
Object space to world space transform           modelToWorld    float4x4

Decal texture                                   decalMap        sampler2D
Environment map                                 environmentMap  samplerCUBE
Eye position (in world space)                   eyePositionW    float3

7.2.2: The Vertex Program

Example 7-1 gives the vertex program that performs the per-vertex reflection vector computation for environment mapping.

Basic Operations

The vertex program starts with the mundane operations: transforming the position into clip space and passing through the texture coordinate set for the decal texture.

oPosition = mul(modelViewProj, position);
oTexCoord = texCoord;

void C7E1v_reflection(float4 position  : POSITION,
                      float2 texCoord  : TEXCOORD0,
                      float3 normal    : NORMAL,

                  out float4 oPosition : POSITION,
                  out float2 oTexCoord : TEXCOORD0,
                  out float3 R         : TEXCOORD1,

              uniform float3   eyePositionW,
              uniform float4x4 modelViewProj,
              uniform float4x4 modelToWorld)
{
  oPosition = mul(modelViewProj, position);
  oTexCoord = texCoord;

  // Compute position and normal in world space
  float3 positionW = mul(modelToWorld, position).xyz;
  float3 N = mul((float3x3)modelToWorld, normal);
  N = normalize(N);

  // Compute the incident and reflected vectors
  float3 I = positionW - eyePositionW;
  R = reflect(I, N);
}

Example 7-1. The C7E1v_reflection Vertex Program

Transforming the Vectors into World Space

Environment maps are typically oriented relative to world space, so you need to calculate the reflection vector in world space (or whatever coordinate system orients the environment map). To do that, you must transform the rest of the vertex data into world space. In particular, you need to transform the vertex position and normal by multiplying them by the modelToWorld matrix:

float3 positionW = mul(modelToWorld, position).xyz;
float3 N = mul((float3x3)modelToWorld, normal);

The modelToWorld matrix is of type float4x4, but we require only the upper 3x3 portion of the matrix when transforming a normal. Cg allows you to cast larger matrices to smaller ones, as in the previous code. When you cast a larger matrix to a smaller matrix type, such as a float4x4 matrix cast to a float3x3 matrix, the upper-left portion of the larger matrix fills the smaller matrix. For example, if you had a float4x4 matrix M:

  M = | m11 m12 m13 m14 |
      | m21 m22 m23 m24 |
      | m31 m32 m33 m34 |
      | m41 m42 m43 m44 |

and you cast it to a float3x3 matrix, you would end up with the matrix N:

  N = | m11 m12 m13 |
      | m21 m22 m23 |
      | m31 m32 m33 |

Recall from Chapter 4 (Section 4.1.3) that the modeling transform converts object-space coordinates to world-space coordinates. In this example, we assume that the modeling transform is affine (rather than projective) and uniform in its scaling (rather than nonuniformly scaling x, y, and z). We also assume that the w component of position is 1, even though position is defined to be a float4 in the prototype for C7E1v_reflection.

These assumptions are commonly true, but if they do not hold for your case, here is what you need to do.

If the modeling transform scales positions nonuniformly, you must multiply normal by the inverse transpose of the modeling matrix (modelToWorldInvTrans), rather than simply by modelToWorld. That is:

float3 N = mul((float3x3)modelToWorldInvTrans, normal);

If the modeling transform is projective or the w component of the object-space position is not 1, you must divide positionW by its w component. That is:

positionW /= positionW.w;

The /= operator is an assignment operator, like the one in C and C++, which in this case divides positionW by positionW.w and then assigns the result to positionW.

Normalizing the Normal

The vertex normal needs to be normalized:

N = normalize(N);

In certain cases, we can skip this normalize function call. If we know that the upper 3x3 portion of the modelToWorld matrix causes no nonuniform scaling, and the object-space normal parameter is guaranteed to be normalized already, the normalize call is unnecessary.

Calculating the Incident Vector

The incident vector is the opposite of the view vector used in Chapter 5 for specular lighting. The incident vector is the vector from the eye to the vertex (whereas the view vector is from the vertex to the eye). With the world-space eye position (eyePositionW) available as a uniform parameter and the world-space vertex position (positionW) available from the previous step, calculating the incident vector is a simple subtraction:

float3 I = positionW - eyePositionW;

Calculating the Reflection Vector

You now have the vectors you need—the position and normal, both in world space—so you can calculate the reflection vector:

R = reflect(I, N);

Next, the program outputs the reflected world-space vector R as a three-component texture coordinate set. The fragment program example that follows will use this texture coordinate set to access a cube map texture containing an environment map.

Normalizing Vectors

You might be wondering why we did not normalize I or R. Normalization is not needed here because the reflected vector is used to query a cube map. The direction of the reflected vector is all that matters when accessing a cube map. Regardless of its length, the reflected ray will intersect the cube map at exactly the same location.

And because the reflect function outputs a reflected vector that has the same length as the incident vector as long as N is normalized, the incident vector's length doesn't matter either in this case.

There is one more reason not to normalize R. The rasterizer interpolates R prior to use by the fragment program in the next example. This interpolation is more accurate if the per-vertex reflection vector is not normalized.

7.2.3: The Fragment Program

Example 7-2 shows a fragment program that is quite short, because the C7E1v_reflection vertex program already took care of the major calculations. All that's left are the cube map lookup and the final color calculation.

void C7E2f_reflection(float2 texCoord : TEXCOORD0,
                      float3 R        : TEXCOORD1,

                  out float4 color    : COLOR,

              uniform float reflectivity,
              uniform sampler2D decalMap,
              uniform samplerCUBE environmentMap)
{
  // Fetch reflected environment color
  float4 reflectedColor = texCUBE(environmentMap, R);

  // Fetch the decal base color
  float4 decalColor = tex2D(decalMap, texCoord);

  color = lerp(decalColor, reflectedColor, reflectivity);
}

Example 7-2. The C7E2f_reflection Fragment Program

The fragment program receives the interpolated reflected vector that it uses to obtain the reflected color from the environment map:

  float4 reflectedColor = texCUBE(environmentMap, R);

Notice the new texture lookup function texCUBE. This function is used specifically for accessing cube maps, and so it interprets the second parameter (which is a three-component texture coordinate set) as a direction.

At this point, you could assign reflectedColor to color, making the rendered object completely reflective. However, no real material is a perfect reflector, so to make things more interesting, the program adds a decal texture lookup, and then mixes the decal color with the reflected color:

  float4 decalColor = tex2D(decalMap, texCoord);

  color = lerp(decalColor, reflectedColor, reflectivity);

The lerp function performs linear interpolation, as you have seen before in Section 3.3.5. The parameters to lerp are decalColor, reflectedColor, and reflectivity. So, when reflectivity is 0, your program writes out just the decal color and shows no reflection. In contrast, when reflectivity is 1, the program writes out just the reflected color, producing a completely reflective, chrome-like appearance. Intermediate values of reflectivity result in a decaled model that has some reflective qualities.

7.2.4: Control Maps

In this example, reflectivity is a uniform parameter. The assumption is that each piece of geometry in the scene has the same reflectivity over its entire surface. But this doesn't necessarily have to be the case! You can create more interesting effects by encoding reflectivity in a texture. This approach allows you to vary the amount of reflectivity at each fragment, which makes it easy to create objects with both reflective and nonreflective parts.

Because the idea of using a texture to control shading parameters is so powerful, we call such a texture a control map. Control maps are especially important because they leverage the GPU's efficient texture manipulation capabilities. In addition, control maps give artists increased control over effects without having to have a deep understanding of the underlying programs. For example, an artist could paint a "reflectivity map" without understanding how environment mapping works.

Control maps are an excellent way to add detail and complexity to almost any program.

7.2.5: Vertex Program vs. Fragment Program

We mentioned previously that you could achieve higher image quality by using the fragment program (instead of the vertex program) to calculate the reflected vector. Why is this? It is for the same reason that per-fragment lighting looks better than per-vertex lighting.

As with specular lighting, the reflection vector for environment mapping varies in a nonlinear way from fragment to fragment. This means that linearly interpolated per-vertex values will be insufficient to capture accurately the variation in the reflection vector. In particular, subtle per-vertex artifacts tend to appear near the silhouettes of objects, where the reflection vector changes rapidly within each triangle. To obtain more accurate reflections, move the reflection vector calculation to the fragment program. This way, you explicitly calculate the reflection vector for each fragment instead of interpolating it.

Despite this additional accuracy, per-fragment environment mapping may not improve image quality enough to justify the additional expense. As explained earlier in the chapter, most people are unlikely to notice or appreciate the more correct reflections at glancing angles. Keep in mind that environment mapping does not generate physically correct reflections to begin with.


This article was originally published on March 24, 2003
