hw5 assignment handout (v.3, Thu Nov 20 18:07) | framework code (v.4, Thu Nov 20 17:02)
support libraries: Win32 | MacOS X
hw5-v3.pdf: added considerable material to the Framework section, including rudimentary documentation for the UI and plenty of handy details about the fragment and the classes you're implementing [Thu Nov 20 18:07]
hw5-v2.pdf: added discussion question 2; added material about transformations in Pipeline [Tue Nov 18 02:04]
hw5.pdf: initial release
pipeline-v4.zip: removed spurious "abstract" in TexturedTP, and fixed up OpenGL rendering to match in all valid modes. Also adjusted the Maze scene and the Flythrough camera to improve usability, and added a maze-appropriate brick texture. [Thu Nov 20 17:02]
pipeline-v3.zip: fixed simple bug in FragmentProcessor.classes (thanks to Tabari Alexander) [Tue Nov 18 17:44]
pipeline-v2.zip: added TexturedFP to FragmentProcessor.java [Tue Nov 18 01:56]
pipeline.zip: initial release
cs465libs-hw5.zip: now includes gl4java-glutfonts.jar, which needs to be on the class path alongside gl4java.jar for this assignment [Tue Nov 18 17:41]
Q: Why is the package gl4java.utils.glut.fonts missing?
A: Because it is in a different .jar file and we forgot to distribute it. Download the new support libraries package above and all will be well.
Q: Where do I find the lighting parameters required to compute the Blinn-Phong model?
A: They are in the Pipeline object that is referenced by the pipe fields in TriangleProcessor and FragmentProcessor. The parameters are:
Vector3f lightDir: the direction toward the light (in eye space coordinates)
float ambientIntensity: the ambient intensity
float lightIntensity = 1.0f: the light source intensity
Color3f specularColor: the material's specular color
float specularExponent: the material's specular exponent
The methods TriangleProcessor.updateLightModel and FragmentProcessor.updateLightModel are called to let you know when those parameters might have changed, so it's safe to make precomputations in those methods and use the results in the triangle and fragment methods.
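For concreteness, here is a minimal sketch of that kind of precomputation, using the javax.vecmath types listed above. The class and method signature are invented for illustration; in the framework these values arrive through the pipe reference, not as arguments.

    import javax.vecmath.Vector3f;

    // Sketch only: cache a normalized copy of the light direction and the scalar
    // parameters once per light-model change, instead of once per vertex/fragment.
    // The class and signature are hypothetical, not the framework's.
    class LightModelCache {
        Vector3f l = new Vector3f();   // unit vector toward the light, in eye space
        float ambient, intensity;      // cached scalar parameters

        void updateLightModel(Vector3f lightDir, float ambientIntensity, float lightIntensity) {
            l.normalize(lightDir);     // l = lightDir / |lightDir|
            ambient = ambientIntensity;
            intensity = lightIntensity;
        }
    }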
Q: What about the 1/r^2 falloff in light intensity?
A: Sorry, this should have been made clearer: we don't expect you to have any falloff from the light source. Since it's a directional source, there is no distance to the light source. Just leave out the 1/r^2 term entirely. For instance, the diffuse term is just (light intensity) * (surface color) * (cos theta).
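In code that diffuse term looks like the following sketch (names are illustrative, and n and l are assumed to be unit vectors in eye space); note that there is no distance or 1/r^2 factor anywhere.

    import javax.vecmath.*;

    // Sketch of the diffuse term for a directional light: no falloff term.
    // All names are illustrative, not the framework's.
    class DiffuseSketch {
        static Color3f diffuse(Color3f surfaceColor, float lightIntensity,
                               Vector3f n, Vector3f l) {
            float nDotL = Math.max(0f, n.dot(l));      // cos theta, clamped at zero
            Color3f out = new Color3f(surfaceColor);
            out.scale(lightIntensity * nDotL);         // (intensity) * (color) * (cos theta)
            return out;
        }
    }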
Q: For texturing, what's the relationship between the texture coordinates that get handed to TriangleProcessor.triangle and the coordinates that are accepted by Texture.sample?
A: They are one and the same. Both are coordinates that run from (0,0) at the lower left corner of the texture to (1,1) at the upper right corner. Note that you don't need to worry about checking bounds, because Texture.sample does: it clamps to the nearest pixel on the boundary of the texture if the coordinates are out of range (but I don't believe the test scenes will ever provide out-of-range texture coordinates).
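To illustrate what that clamping means, here is a rough nearest-pixel lookup; the storage layout and filtering here are assumptions made for the example and need not match the framework's Texture class.

    import javax.vecmath.Color3f;

    // Illustration of clamp-to-edge lookup for texture coordinates in [0,1] x [0,1].
    // The row-major Color3f array and nearest-pixel filtering are assumptions.
    class ClampedTextureSketch {
        int nx, ny;       // texture width and height in texels
        Color3f[] data;   // nx * ny texels, row major, (0,0) at the lower left

        Color3f sample(float u, float v) {
            int ix = Math.round(u * nx - 0.5f);        // nearest texel column
            int iy = Math.round(v * ny - 0.5f);        // nearest texel row
            ix = Math.min(nx - 1, Math.max(0, ix));    // clamp to the boundary texel
            iy = Math.min(ny - 1, Math.max(0, iy));
            return data[iy * nx + ix];
        }
    }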
Q: I'm confused about where the different coefficients in the lighting model come from. The handout says that the diffuse and ambient colors are the same, and the specular color is a constant.
A: The handout is using "diffuse color", "ambient color", and "specular color" in the same sense they were used in the code of the ray tracing assignment. The slides call them "coefficients" rather than "colors". The diffuse and ambient colors are both set by the vertex color or texture, so that the first two components of the lighting model are (vertex color) * (ambient intensity) + (vertex color) * (light intensity) * (cos theta). The specular color ("coefficient" in the slides) is constant in the sense that it does not vary from one vertex to the next.
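Putting the three components together per channel looks roughly like this sketch; names are illustrative, n, l, and h are assumed to be unit vectors in eye space, and the specular term follows the usual Blinn-Phong form.

    import javax.vecmath.*;

    // Sketch of the full model described above:
    //   (vertex color) * (ambient intensity)
    // + (vertex color) * (light intensity) * (cos theta)
    // + (specular color) * (light intensity) * (n . h)^specularExponent
    class ShadeSketch {
        static Color3f shade(Color3f vertexColor, Color3f specularColor,
                             float ambientIntensity, float lightIntensity,
                             float specularExponent, Vector3f n, Vector3f l, Vector3f h) {
            float diff = ambientIntensity + lightIntensity * Math.max(0f, n.dot(l));
            float spec = lightIntensity
                    * (float) Math.pow(Math.max(0f, n.dot(h)), specularExponent);
            return new Color3f(vertexColor.x * diff + specularColor.x * spec,
                               vertexColor.y * diff + specularColor.y * spec,
                               vertexColor.z * diff + specularColor.z * spec);
        }
    }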
Q: How do I compute the half vector when I don't know the light position?
A: You don't need to know the light position, only the direction to the light, which is exactly what's stored in Pipeline.lightDir. Note that in eye space both the eye and the light vectors are constant (under the infinite viewer approximation we're asking you to use), which is convenient and efficient.
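A minimal sketch, assuming the eye-space view direction is (0, 0, 1) under that infinite-viewer approximation:

    import javax.vecmath.Vector3f;

    // Half vector from the light direction alone: h = normalize(l + e), where e is
    // the constant eye direction in eye space. No light position is needed.
    class HalfVectorSketch {
        static Vector3f halfVector(Vector3f lightDir) {
            Vector3f l = new Vector3f(lightDir);
            l.normalize();
            Vector3f e = new Vector3f(0f, 0f, 1f);   // infinite-viewer eye direction
            Vector3f h = new Vector3f();
            h.add(l, e);                             // h = l + e ...
            h.normalize();                           // ... then normalize
            return h;
        }
    }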
Q: What's with all the transformations? What do I need them for?
A: There are three transformations stored in Pipeline, which means there are four coordinate systems around. The application provides the vertices and normals in object coordinates, and when you transform them by the modelView matrix they are in eye coordinates. In the slides this was presented as two different transformations: a modeling transformation to get to world space, followed by a viewing transformation to get to eye space. The projection matrix transforms from eye space to clip space, and the viewport matrix goes from there to screen space (as described in the lecture slides), which is what you need to give to the rasterizer. You actually don't need to use clip coordinates; the only intermediate space that you need is eye space, where you should do your lighting calculations.
As an example, ConstColorTP concatenates all three matrices into a single transformation it calls m. Then a single matrix-vector multiplication is required to transform each vertex all the way from object space to screen space. For the triangle processors that do shading, you'll need to be more careful, since you need to do the lighting computations in eye space.
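A sketch of that concatenation, with invented parameter names for the three Pipeline transforms; where the homogeneous divide happens, here or in the rasterizer, depends on the framework, so treat that line as illustrative.

    import javax.vecmath.*;

    // Sketch: object space to screen space in a single matrix-vector multiply,
    // in the spirit of ConstColorTP. Names are illustrative, not the framework's.
    class TransformSketch {
        static Vector4f objectToScreen(Matrix4f modelView, Matrix4f projection,
                                       Matrix4f viewport, Point3f objVertex) {
            Matrix4f m = new Matrix4f(viewport);   // m = viewport * projection * modelView
            m.mul(projection);
            m.mul(modelView);
            Vector4f v = new Vector4f(objVertex.x, objVertex.y, objVertex.z, 1f);
            m.transform(v);                        // object space -> homogeneous screen space
            v.scale(1f / v.w);                     // homogeneous divide (possibly the rasterizer's job)
            return v;
        }
    }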
Cornell CS465 Fall 2003 (cs465-staff@cs.cornell.edu)