Multi-Pass Reflections

The Dragon in the Mirror

"just the same... only things go the other way"

Let's take a look at the code for Example 2, which is a good starting point if you want to render reflections using multi-pass rendering.

The basic idea

The most common approach to rendering reflections with rasterization-based graphics is a multi-pass technique. The basic idea is to render the scene in two passes:

  • In the first pass, you render the reflected scene from a virtual camera positioned to approximately sample the reflected rays (where to put this camera depends on the location of the current camera and on the reflecting surface). Rather than writing the output pixels to the screen, you write them to a separate pixel buffer in memory.
  • In the second pass, you render the scene (to the screen) from the current perspective of the camera. But this time you use the output of your first pass as a texture to color the reflective surface in your scene.

In the diagram below, our strategy works by first rendering the scene from the mirror camera's perspective, then using the rendered image to texture our mirror during a second pass, where we render the scene from our camera's perspective (the blue one on the left). If we compare what a reflected ray from the second pass would look like to a transmitted ray from the first pass, we can see why this technique works.

The Example2 Scene

The best starting point for implementing reflections is probably the Example 2 Scene in the starter code. This scene already implements two rendering passes in the scene controller, with each rendering pass preceded by a function that you can customize in the scene model. The result of the first pass is used as a texture in the second pass.

Example 2 also shows how to use a different camera for each of the passes. Here is code from the Example2SceneController:

/****************** First Pass *******************/
/**
* Tell the model to prep for our first pass.
*/
this.model.prepForFirstPass(currentTextureBuffer);

/**
* Set the render target to our texture target and clear it
*/
this.setCurrentRenderTarget(currentDestinationBuffer);
context.renderer.clear();

/**
* Render the scene to texture from our model's virtualScreenCamera perspective
*/
context.renderer.render(this.getThreeJSScene(), this.getThreeJSCamera(this.model.virtualScreenCamera));


/****************** Second Pass *******************/
/**
* Setting the render target to null will make the screen our render target again
*/
this.setCurrentRenderTarget(null);

/**
* Prep for our second pass and then render it
*/
this.model.prepForSecondPass(currentDestinationBuffer);
context.renderer.clear();
context.renderer.render(this.getThreeJSScene(), this.getThreeJSCamera());

The example also shows how to change the transform and projection matrix of the virtual screen camera. There is a toggle button in the control panel that makes the virtual screen camera move side to side, and another that flips its image in the y direction. Here's what it looks like:
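As a rough sketch of what a y-flip like this amounts to (plain TypeScript, not the actual AniGraph implementation; the function name is ours): negating the y row of the projection matrix negates clip-space y for every vertex, which flips the rendered image vertically.

```typescript
// Hypothetical sketch: flip a camera's image in y by negating the y row of a
// row-major 4x4 projection matrix. Not the actual starter-code API.
type Mat4 = number[]; // 16 entries, row-major

function flipProjectionY(proj: Mat4): Mat4 {
  const out = proj.slice();
  for (let c = 0; c < 4; c++) {
    out[4 * 1 + c] = -out[4 * 1 + c]; // negate the row that produces clip y
  }
  return out;
}

// With an identity "projection", a vertex at y = 0.3 would land at y = -0.3.
const flipped = flipProjectionY([
  1, 0, 0, 0,
  0, 1, 0, 0,
  0, 0, 1, 0,
  0, 0, 0, 1,
]);
```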

Also note that this example now makes the virtual screen camera a child of the main scene camera. You can alternatively make it a direct child of world coordinates; it's up to you. The relevant code can be found in the initCamera() method of Example2SceneController:

/**
* We will create a virtual camera to render our virtual screen's viewpoint from
* @type {ACameraModel}
*/
this.virtualScreenCamera = new ACameraModel(ACamera.CopyOf(this.camera));
// this.addChild(this.virtualScreenCamera); // use this to make it a child of world coordinates
this.cameraModel.addChild(this.virtualScreenCamera); // use this to make it a child of the main camera

Update

Reflection is a tricky feature to implement. To make it more manageable, I've added some helper code: a mirror shader, which Example2 now uses by default. The main change in the mirror shader is that instead of using the texture coordinates specified in the mirror geometry for the texture lookup, it shows you how to look up a texture based on the current fragment's screen coordinates. Originally, when I made the graphic above on this page, I had intended for you to calculate these texture coordinates as a vertex attribute; if you have already done this, please let us know and we'll give you some extra credit for it. If you have been struggling with that, I recommend using the example code provided in the mirror shader.

To understand what the mirror shader does, try clicking on the UseNDCCoordinates and ShowMirrorTextureCoords checkboxes in the control panel of the Example2 Scene:

For some insight into what is happening here, let's look at our mirror vertex shader first:

precision highp float;
precision highp int;
varying vec2 vUv;
varying vec4 vNDC;

void main() {
    vUv = uv;
    vNDC = projectionMatrix * modelViewMatrix * vec4(position.xyz, 1.0);
    gl_Position = vNDC;
}

Here we see that the varying vec4 vNDC takes the projected (clip-space) position of our vertex and passes it along as a varying to the fragment shader (meaning it will be interpolated across the triangle). This lets us calculate the screen position of each fragment in the fragment shader. Now let's look at the fragment shader:

uniform sampler2D inputMap;
uniform bool inputMapProvided;
varying vec4 vPosition;
varying vec2 vUv;
varying vec4 vNDC;
uniform bool useNDCCoords;
uniform bool showTextureCoords;

void main() {
    // We want to sample from a texture that represents the mirrored world rendered from our current perspective.
    // To do this, let's start by calculating the coordinates of our current fragment.
    // More specifically, we will homogenize our normalized device coordinates to get a value that maps our screen to
    // the x and y ranges [-1,1]
    vec2 currentZeroCenterNormalizedScreenCoordinates = vNDC.xy / vNDC.w;
    // Let's convert these to texture coordinates by shifting and scaling our range to [0,1]
    vec2 textureCoordinates = currentZeroCenterNormalizedScreenCoordinates * 0.5 + vec2(0.5, 0.5);

    if(!useNDCCoords){
        textureCoordinates = vUv;
    }

    // Let's sample the texture
    vec4 textureColor = texture(inputMap, textureCoordinates);

    vec4 outputColor = textureColor;
    if(showTextureCoords){
        outputColor = vec4(textureCoordinates, 0.0, 1.0);
    }

    gl_FragColor = outputColor;
}

The advantage of this approach is that if you use the mirror shader, all you need to do is make sure that the rendering pass used to create your mirror texture (the first pass) renders the scene reflected about the mirror surface. To figure out how to do this, think of it as a change of coordinates: transform from world coordinates into the mirror's coordinate system, reflect by scaling the z dimension by -1, then transform back into world coordinates. That combined operation is the transform from world coordinates to "mirror world" coordinates.
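The composition described above can be sketched in plain TypeScript (no AniGraph or three.js; the matrix helpers and the example mirror placement are ours, chosen for illustration):

```typescript
// Sketch of the "mirror world" transform: mirrorToWorld * scaleZ(-1) * worldToMirror.
// mirrorToWorld is a made-up example pose placing the mirror plane at z = 2.
type Mat4 = number[]; // 16 entries, row-major

function mul(a: Mat4, b: Mat4): Mat4 {
  const out = new Array(16).fill(0);
  for (let r = 0; r < 4; r++)
    for (let c = 0; c < 4; c++)
      for (let k = 0; k < 4; k++)
        out[4 * r + c] += a[4 * r + k] * b[4 * k + c];
  return out;
}

function apply(m: Mat4, p: [number, number, number]): [number, number, number] {
  const v = [p[0], p[1], p[2], 1];
  const out = [0, 0, 0, 0];
  for (let r = 0; r < 4; r++)
    for (let k = 0; k < 4; k++)
      out[r] += m[4 * r + k] * v[k];
  return [out[0] / out[3], out[1] / out[3], out[2] / out[3]];
}

// Example mirror: local z axis aligned with world z, plane at world z = 2,
// so mirrorToWorld is just a translation by (0, 0, 2).
const mirrorToWorld: Mat4 = [1,0,0,0,  0,1,0,0,  0,0,1,2,  0,0,0,1];
const worldToMirror: Mat4 = [1,0,0,0,  0,1,0,0,  0,0,1,-2, 0,0,0,1];
const flipZ: Mat4         = [1,0,0,0,  0,1,0,0,  0,0,-1,0, 0,0,0,1];
const reflection = mul(mul(mirrorToWorld, flipZ), worldToMirror);

// The world origin (distance 2 in front of the mirror) reflects to (0, 0, 4),
// and any point on the mirror plane itself maps to itself.
```

Applying this reflection as an extra transform when rendering the first pass (or equivalently, reflecting the virtual camera) produces the reflected scene that the mirror shader then samples in the second pass.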

Once you get it working, it should look like this: