# Environment Mapping Techniques

### 7.2: Reflective Environment Mapping

Let's start with the most common use of environment mapping: creating a chrome-like reflective object. This is the bare-bones application of the technique, yet it already produces nice results, as shown in Figure 7-4.

In this example, the vertex program computes the incident and reflected rays. It then passes the reflected ray to the fragment program, which looks up the environment map and uses it to add a reflection to the fragment's final color. To make things more interesting, and to make our example more like a real application, we blend the reflection with a decal texture. A uniform parameter called `reflectivity` allows the application to control how reflective the material is.


**Figure 7-4. Reflective Environment Mapping**

#### 7.2.1: Application-Specified Parameters

Table 7-1 lists the data that the application needs to provide to the graphics pipeline.

**Table 7-1. Application-Specified Parameters for Per-Vertex Environment Mapping**

| Parameter | Variable Name | Type |
| --- | --- | --- |
| **VERTEX PROGRAM VARYING PARAMETERS** | | |
| Object-space vertex position | `position` | `float4` |
| Object-space vertex normal | `normal` | `float3` |
| Texture coordinates | `texCoord` | `float2` |
| **VERTEX PROGRAM UNIFORM PARAMETERS** | | |
| Concatenated modelview and projection matrices | `modelViewProj` | `float4x4` |
| Object-space-to-world-space transform | `modelToWorld` | `float4x4` |
| **FRAGMENT PROGRAM UNIFORM PARAMETERS** | | |
| Decal texture | `decalMap` | `sampler2D` |
| Environment map | `environmentMap` | `samplerCUBE` |
| Eye position (in world space) | `eyePositionW` | `float3` |
| Reflectivity | `reflectivity` | `float` |

#### 7.2.2: The Vertex Program

Example 7-1 gives the vertex program that performs the per-vertex reflection vector computation for environment mapping.

```
void C7E1v_reflection(float4 position : POSITION,
                      float2 texCoord : TEXCOORD0,
                      float3 normal   : NORMAL,

                  out float4 oPosition : POSITION,
                  out float2 oTexCoord : TEXCOORD0,
                  out float3 R         : TEXCOORD1,

              uniform float3   eyePositionW,
              uniform float4x4 modelViewProj,
              uniform float4x4 modelToWorld)
{
  oPosition = mul(modelViewProj, position);
  oTexCoord = texCoord;

  // Compute position and normal in world space
  float3 positionW = mul(modelToWorld, position).xyz;
  float3 N = mul((float3x3)modelToWorld, normal);
  N = normalize(N);

  // Compute the incident and reflected vectors
  float3 I = positionW - eyePositionW;
  R = reflect(I, N);
}
```

**Example 7-1. The** `C7E1v_reflection` **Vertex Program**

##### Basic Operations

The vertex program starts with the mundane operations: transforming the position into clip space and passing through the texture coordinate set for the decal texture:

```
oPosition = mul(modelViewProj, position);
oTexCoord = texCoord;
```

##### Transforming the Vectors into World Space

Environment maps are typically oriented relative to world space, so you need to calculate the reflection vector in world space (or whatever coordinate system orients the environment map). To do that, you must transform the rest of the vertex data into world space. In particular, you need to transform the vertex position and normal by multiplying them by the `modelToWorld` matrix:

```
float3 positionW = mul(modelToWorld, position).xyz;
float3 N = mul((float3x3)modelToWorld, normal);
```

The `modelToWorld` matrix is of type `float4x4`, but we require only the upper 3×3 section of the matrix when transforming a normal. Cg allows you to cast larger matrices to smaller matrices, as in the previous code. When you cast a larger matrix to a smaller matrix type, such as a `float4x4` matrix cast to a `float3x3` matrix, the upper left portion of the larger matrix fills in the matrix of the smaller type. For example, if you had a `float4x4` matrix *M*:

```
    | m11  m12  m13  m14 |
M = | m21  m22  m23  m24 |
    | m31  m32  m33  m34 |
    | m41  m42  m43  m44 |
```

and you cast it to a `float3x3` matrix, you would end up with the matrix *N*:

```
    | m11  m12  m13 |
N = | m21  m22  m23 |
    | m31  m32  m33 |
```
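To make the cast concrete, here is a minimal C sketch (not Cg; the row-major layout and the function names are ours for illustration) of extracting the upper-left 3×3 block of a 4×4 matrix and using it to transform a normal:

```c
#include <assert.h>

/* Copy the upper-left 3x3 block of a 4x4 matrix, as the Cg
   (float3x3) cast does. Row-major storage is assumed here. */
void cast_upper3x3(const float M[4][4], float N[3][3])
{
    for (int r = 0; r < 3; r++)
        for (int c = 0; c < 3; c++)
            N[r][c] = M[r][c];   /* the translation terms are dropped */
}

/* out = N * n, matching Cg's mul(matrix, vector) convention */
void mul3x3(const float N[3][3], const float n[3], float out[3])
{
    for (int r = 0; r < 3; r++)
        out[r] = N[r][0] * n[0] + N[r][1] * n[1] + N[r][2] * n[2];
}
```

Because the translation terms live outside the upper-left 3×3 block, the cast conveniently discards them, which is exactly what you want for a direction vector like a normal.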

Recall from Chapter 4 (Section 4.1.3) that the modeling transform converts object-space coordinates to world-space coordinates. In this example, we assume that the modeling transform is affine (rather than projective) and uniform in its scaling (rather than nonuniformly scaling x, y, and z). We also assume that the *w* component of `position` is 1, even though `position` is declared as a `float4` in the prototype for `C7E1v_reflection`.

These assumptions are commonly true, but if they do not hold for your case, here is what you need to do.

If the modeling transform scales positions nonuniformly, you must multiply `normal` by the inverse transpose of the modeling matrix (`modelToWorldInvTrans`), rather than simply by `modelToWorld`. That is:

```
float3 N = mul((float3x3)modelToWorldInvTrans, normal);
```

If the modeling transform is projective or the *w* component of the object-space `position` is not 1, you must divide `positionW` by its *w* component. That is:

```
positionW /= positionW.w;
```
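As a sanity check, the homogeneous divide can be sketched in plain C (the function name is ours; it simply divides the x, y, and z components by w to recover a Euclidean position):

```c
#include <assert.h>

/* Recover a Euclidean 3-D position from a homogeneous
   (x, y, z, w) result by dividing through by w. */
void homogeneous_divide(float p[4])
{
    p[0] /= p[3];
    p[1] /= p[3];
    p[2] /= p[3];
    p[3] = 1.0f;
}
```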

The `/=` operator is an assignment operator, like the one in C and C++, which in this case divides `positionW` by `positionW.w` and then assigns the result to `positionW`.

##### Normalizing the Normal

The vertex normal needs to be normalized:

```
N = normalize(N);
```

In certain cases, we can skip this function call. If we know that the upper 3×3 portion of the `modelToWorld` matrix causes no nonuniform scaling and that the object-space `normal` parameter is guaranteed to be already normalized, the `normalize` call is unnecessary.

##### Calculating the Incident Vector

The incident vector is the opposite of the view vector used in Chapter 5 for specular lighting. The *incident vector* is the vector from the eye to the vertex (whereas the view vector is from the vertex to the eye). With the world-space eye position (`eyePositionW`) available as a uniform parameter and the world-space vertex position (`positionW`) available from the previous step, calculating the incident vector is a simple subtraction:

```
float3 I = positionW - eyePositionW;
```

##### Calculating the Reflection Vector

You now have the vectors you need—the position and normal, both in world space—so you can calculate the reflection vector:

```
float3 R = reflect(I, N);
```

Next, the program outputs the reflected world-space vector ` R` as a three-component texture coordinate set. The fragment program example that follows will use this texture coordinate set to access a cube map texture containing an environment map.
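Under the hood, `reflect` evaluates the standard reflection formula r = i − 2 (n · i) n, where n is assumed to be normalized. A small C sketch of the same formula (not Cg; the function name is ours):

```c
#include <assert.h>

/* r = i - 2 * dot(n, i) * n, with n assumed normalized --
   the same formula Cg's reflect(i, n) computes. */
void reflect3(const float i[3], const float n[3], float r[3])
{
    float d = n[0]*i[0] + n[1]*i[1] + n[2]*i[2];
    for (int k = 0; k < 3; k++)
        r[k] = i[k] - 2.0f * d * n[k];
}
```

For example, a ray heading diagonally down onto a floor with an up-facing normal bounces back up with its horizontal motion unchanged.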

##### Normalizing Vectors

You might be wondering why we did not normalize `I` or `R`. Normalization is not needed here because the reflected vector is used to query a cube map. The direction of the reflected vector is all that matters when accessing a cube map: regardless of its length, the reflected ray will intersect the cube map at exactly the same location.

And because the `reflect` function outputs a reflected vector that has the same length as the incident vector as long as `N` is normalized, the incident vector's length doesn't matter either in this case.

There is one more reason not to normalize `R`: the rasterizer interpolates `R` prior to its use by the fragment program in the next example, and this interpolation is more accurate if the per-vertex reflection vector is not normalized.

#### 7.2.3: The Fragment Program

Example 7-2 shows a fragment program that is quite short, because the ` C7E1v_reflection` vertex program already took care of the major calculations. All that's left are the cube map lookup and the final color calculation.

```
void C7E2f_reflection(float2 texCoord : TEXCOORD0,
                      float3 R        : TEXCOORD1,

                  out float4 color : COLOR,

              uniform float       reflectivity,
              uniform sampler2D   decalMap,
              uniform samplerCUBE environmentMap)
{
  // Fetch reflected environment color
  float4 reflectedColor = texCUBE(environmentMap, R);

  // Fetch the decal base color
  float4 decalColor = tex2D(decalMap, texCoord);

  color = lerp(decalColor, reflectedColor, reflectivity);
}
```

**Example 7-2. The** `C7E2f_reflection` **Fragment Program**

The fragment program receives the interpolated reflected vector that it uses to obtain the reflected color from the environment map:

```
float4 reflectedColor = texCUBE(environmentMap, R);
```

Notice the new texture lookup function `texCUBE`. This function is used specifically for accessing cube maps, and so it interprets the second parameter (a three-component texture coordinate set) as a direction.
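Conceptually, the hardware picks the cube face from whichever component of the direction has the largest magnitude. The following C sketch illustrates the idea (a simplification: real APIs also specify exact s/t sign conventions for each face, which we omit):

```c
#include <assert.h>
#include <math.h>

/* Select a cube-map face for a direction vector: the face is chosen
   by the coordinate of largest magnitude. Faces are numbered in the
   common +X,-X,+Y,-Y,+Z,-Z order. */
int cube_face(const float r[3])
{
    float ax = fabsf(r[0]), ay = fabsf(r[1]), az = fabsf(r[2]);
    if (ax >= ay && ax >= az) return r[0] > 0.0f ? 0 : 1; /* +X / -X */
    if (ay >= az)             return r[1] > 0.0f ? 2 : 3; /* +Y / -Y */
    return r[2] > 0.0f ? 4 : 5;                           /* +Z / -Z */
}
```

Note that scaling the direction vector never changes the chosen face, which is why the length of `R` is irrelevant to the lookup.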

At this point, you could assign `reflectedColor` to `color`, making the rendered object completely reflective. However, no real material is a perfect reflector, so to make things more interesting, the program adds a decal texture lookup, and then mixes the decal color with the reflected color:

```
float4 decalColor = tex2D(decalMap, texCoord);
color = lerp(decalColor, reflectedColor, reflectivity);
```

The `lerp` function performs linear interpolation, as you have seen before in Section 3.3.5. The parameters to `lerp` are `decalColor`, `reflectedColor`, and `reflectivity`. So, when `reflectivity` is 0, your program writes out just the decal color and shows no reflection. In contrast, when `reflectivity` is 1, the program writes out just the reflected color, producing a completely reflective, chrome-like appearance. Intermediate values of `reflectivity` result in a decaled model that has some reflective qualities.

#### 7.2.4: Control Maps

In this example, `reflectivity` is a uniform parameter. The assumption is that each piece of geometry in the scene has the same reflectivity over its entire surface. But this doesn't necessarily have to be the case! You can create more interesting effects by encoding reflectivity in a texture. This approach allows you to vary the amount of reflectivity at each fragment, which makes it easy to create objects with both reflective and nonreflective parts.

Because the idea of using a texture to control shading parameters is so powerful, we call such a texture a *control map*. Control maps are especially important because they leverage the GPU's efficient texture manipulation capabilities. In addition, control maps give artists increased control over effects without having to have a deep understanding of the underlying programs. For example, an artist could paint a "reflectivity map" without understanding how environment mapping works.

Control maps are an excellent way to add detail and complexity to almost any program.
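The idea of a control map can be sketched in C as replacing the uniform with a per-fragment fetch; the tiny 2×2 single-channel "reflectivity map" and nearest-neighbor lookup below are deliberate simplifications of a real texture fetch (all names are ours):

```c
#include <assert.h>

/* Sample a per-fragment reflectivity value from a tiny 2x2
   single-channel control map using nearest-neighbor lookup,
   with (s, t) texture coordinates in [0, 1]. */
float sample_reflectivity(const float map[2][2], float s, float t)
{
    int x = s < 0.5f ? 0 : 1;   /* nearest texel in s */
    int y = t < 0.5f ? 0 : 1;   /* nearest texel in t */
    return map[y][x];
}
```

Each fragment would then pass its sampled value to the final blend instead of a single uniform, so one half of the surface can be matte while the other half is chrome.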

#### 7.2.5: Vertex Program vs. Fragment Program

We mentioned previously that you could achieve higher image quality by using the fragment program (instead of the vertex program) to calculate the reflected vector. Why is this? It is for the same reason that per-fragment lighting looks better than per-vertex lighting.

As with specular lighting, the reflection vector for environment mapping varies in a nonlinear way from fragment to fragment. This means that linearly interpolated per-vertex values will be insufficient to capture accurately the variation in the reflection vector. In particular, subtle per-vertex artifacts tend to appear near the silhouettes of objects, where the reflection vector changes rapidly within each triangle. To obtain more accurate reflections, move the reflection vector calculation to the fragment program. This way, you explicitly calculate the reflection vector for each fragment instead of interpolating it.
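A small numeric experiment in C (our own construction, not from the original text) shows how far the two approaches can diverge. Halfway between two vertices whose normals differ by 90 degrees, the interpolated per-vertex reflection vector collapses toward zero length, while the per-fragment reflection about the interpolated normal does not:

```c
#include <assert.h>
#include <math.h>

/* r = i - 2 * dot(n, i) * n, with n assumed normalized */
static void reflect3(const float i[3], const float n[3], float r[3])
{
    float d = n[0]*i[0] + n[1]*i[1] + n[2]*i[2];
    for (int k = 0; k < 3; k++)
        r[k] = i[k] - 2.0f * d * n[k];
}
```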

Despite this additional accuracy, per-fragment environment mapping may not improve image quality enough to justify the additional expense. As explained earlier in the chapter, most people are unlikely to notice or appreciate the more correct reflections at glancing angles. Keep in mind that environment mapping does not generate physically correct reflections to begin with.
