CS4620 (5620) Introduction to Computer Graphics
CS4621 (5621) Computer Graphics Practicum
Cornell University
MWF 1:25pm, Gates G01
M 12:20pm, Gates G01 [4621 only]
Instructor: Steve Marschner (office hours: Mon 2:30–3:30, Weds 4:30–5:30)
Staff
Graduate TAs
Ruojin Cai (Head TA)
Joy Zhang
Xi Deng
Ugrad TAs
Seth Brunell
Sitian Chen
Lily Lin
Zhiqiu Lin
Nayanathara Palanivel
Olivia Xiang
Schedule
day | month | topic | reading | assignments
---|---|---|---|---
30 | Aug | Introduction slides | ||
2 | Sep | —Labor Day— | ||
4 | Sep | Images and Displays slides | Chap. 3 | |
6 | Sep | Sampling and reconstruction slides | Sections 9.1–9.4 | |
9 | Sep | Image resampling | ||
11 | Sep | Images and Displays cont. | ||
13 | Sep | Triangle meshes | .obj files demo meshes slides | Filters pa due | |
16 | Sep | Triangle meshes cont. | ||
18 | Sep | Ray tracing intersection slides | Chap. 4 | |
20 | Sep | Ray tracing intersection | Mesh hw due | |
23 | Sep | Perspective slides | ||
25 | Sep | Ray tracing shading slides | Chap. 4, Sec. 2.7 | |
27 | Sep | Ray tracing interpolation slides | Sec 11.1–2 except 11.2.4 | Mesh pa due |
30 | Sep | Texture mapping slides | Chap. 11 | |
2 | Oct | Ray Tracing acceleration slides | Sec 12.3 | |
4 | Oct | Transformations slides | Chap. 6, Review 5.1–5.3 | Ray part 1 due |
7 | Oct | Transformations | ||
9 | Oct | Viewing: Orthographic demo slides | Sec. 7.0–7.1 | |
11 | Oct | Viewing: Perspective | Sec. 7.2–7.5 | Ray part 2 due |
14 | Oct | —Fall Break— | ||
16 | Oct | Rasterization demo slides | Sec. 8–8.1, 12.2 | |
18 | Oct | Graphics Pipeline demos slides | Chap. 8 | |
21 | Oct | OpenGL and GLSL exhibits slides | Chap. 17 | |
23 | Oct | Games with textures slides | Chap. 11 | |
25 | Oct | Guest lecture | Manip due | |
28 | Oct | Midterm review | ||
29 | Oct | Midterm: 7:30pm, Gates G01 | ||
30 | Oct | Games with textures | ||
1 | Nov | Antialiasing slides | Chap. 3, Sec. 11.4 | |
4 | Nov | Compositing slides | ||
6 | Nov | Splines slides | Sec. 15.0–15.3 | |
8 | Nov | Splines | Sec. 15.4–15.6 | Shaders pa due |
11 | Nov | Splines | ||
13 | Nov | Subdivision surfaces slides | ||
15 | Nov | Triangle meshes 2 slides | Sec. 12.1 | Splines hw due |
18 | Nov | Hierarchies and scene graphs slides | ||
20 | Nov | Animation slides | Sec. 16.0–2 | |
22 | Nov | Animation notes | Sec. 16.3–7 (just for interest) | Splines pa due |
25 | Nov | Color science slides | Chap. 19 | |
27 | Nov | —Thanksgiving Break— | ||
29 | Nov | —Thanksgiving Break— | ||
2 | Dec | —snow day— | ||
4 | Dec | Color science | ||
6 | Dec | Final exam review | Animation pa due | |
9 | Dec | Wrap-up slides |
Projects
There will be 7 projects and 6 homeworks during the semester.
The CS4620 late policy is described in one of the first lectures. Briefly, you have 7 slip days that you can use to turn assignments in late: turn one assignment in 7 days late or turn 7 assignments in one day late. After you run out of slip days there are late penalties, and assignments can't ever be turned in more than 7 days late.
Do this project alone or in groups of two, as you prefer.
Due: Friday 13 September 2019 (11:59pm)
Introduction
When displaying images, the pixels of the image don't usually line up with the pixels of the display. We need to be able to display images at different sizes and positions, and as we will see as we learn more graphics, in many applications it's also important to be able to rotate, distort, and shift images with sub-pixel accuracy.
The general problem of moving an image from one pixel grid to another is called resampling. The core idea is to find the location of each of the pixels of the destination image in the pixel grid of the source image, then interpolate to get a value at that location. Interpolation is needed because the destination pixels are not going to fall exactly at the locations of the source pixels, so we need a way to make up pixel values at locations between the source pixels.
This assignment is primarily about implementing the most commonly used methods for interpolating images in the context of resampling. They are
- Nearest-neighbor sampling: This is the simplest (and generally fastest) way to resample images. It is what you would do if it hadn't yet occurred to you that interpolation was necessary: just use the closest pixel in the source image. You will see that it produces some annoying (and familiar) "digital looking" artifacts.
- Linear interpolation: This is the workhorse for image resampling in interactive graphics. It's still simple and fast, but it reduces the artifacts a lot.
- Bicubic filtering: This is the most commonly used method when quality matters. It considerably reduces the artifacts of linear interpolation, though it takes more time.
Warmup
As a warmup for this assignment, do the following problem and put the answer into a comment at the top of FilterResampler.java.
In this warm-up exercise, you'll compute the weights corresponding to pixels in an image interpolation operation, using the same parameters as used by the resample method in the programming part.
Suppose we have a grayscale source image that is 8x8 pixels in size and has a 2x2 white (pixel value 1.0) square in the center (covering pixels (3,3) to (4,4)), on a black (pixel value 0) background. The position of the bottom-left source sample point in the source image, src[0,0], is defined as (0,0). We resample the source image to a destination image of size 5x5 using the source rectangle (left, bottom, right, top) = (2.4, 3.2, 4.8, 5.6). The bottom-left corner of the source rectangle is at (2.4, 3.2) in source-image coordinates, but the bottom-left destination sample point, dst[0,0], is not located at (2.4, 3.2). Depending on the interpolation filter used for this operation, we see different results in the output image; some examples are:
- Nearest neighbor: dst[1,1] = 1.0; dst [1,3] = 0
- Linear interpolation: dst[0,3] = 0.0768; dst[1,1] = 1.0; dst[1,2] = 0.6
- B-spline cubic: dst[1,2] = 0.5092
In each of these cases, what are the input pixels that contributed to the output pixel, and what are their weights? You can entirely omit pixels that make zero contribution to the result (and their weights). It's helpful to first figure out where each destination sample point lands in the source image. Please write the result in the following format. Suppose output pixel [2,3] landed at (3.5, 2.5) in the source image and you are using linear interpolation (this is not where pixel [2,3] actually falls!). Then dst[2,3] would have the value 0.5 and you would write:
dst[2,3] = 0.25 * src[3,3] + 0.25 * src[4,3] = 0.5
In this example source pixels [3,2] and [4,2] would also contribute with weight 0.25, but their values are zero so they are omitted.
Round your results to the nearest ten-thousandth. Note that the output values above provide a check on your weights. Also, the following illustration is to scale and can help you sanity-check your numbers.
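If you want to double-check your weight arithmetic mechanically, a few lines of throwaway Java suffice. This is a minimal sketch of our own (not part of the framework) that reproduces the dst[2,3] example above using the standard bilinear weights:

```java
// Throwaway check of the bilinear weights for the worked example above,
// where the sample point is (3.5, 2.5) in source coordinates.
public class WeightCheck {
    public static void main(String[] args) {
        double x = 3.5, y = 2.5;                   // sample point
        int x0 = (int) Math.floor(x), y0 = (int) Math.floor(y);
        double fx = x - x0, fy = y - y0;           // fractional parts (both 0.5 here)
        double w00 = (1 - fx) * (1 - fy);          // weight of src[3,2] = 0.25
        double w10 = fx * (1 - fy);                // weight of src[4,2] = 0.25
        double w01 = (1 - fx) * fy;                // weight of src[3,3] = 0.25
        double w11 = fx * fy;                      // weight of src[4,3] = 0.25
        // src[3,3] = src[4,3] = 1.0 (white square); src[3,2] = src[4,2] = 0.
        System.out.println(w01 * 1.0 + w11 * 1.0); // prints 0.5, matching dst[2,3]
    }
}
```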
Assignment
The assignment framework displays an image in a resizable window, and uses an interface ResampleEngine to compute the image that is displayed in the window. Here is the link to the assignment framework on GitHub. If you use Eclipse, you can select "import projects" and then click "Projects from Folder or Archive". Click "Directory" and choose the folder "filters" to import it as an Eclipse project. Right-click and select "Configure Build Path" to configure the libraries if needed (the provided build path should work on macOS and Windows). On macOS, you need to create a run configuration with the required "-XstartOnFirstThread" VM argument; otherwise the application will crash immediately.
The single method in the interface has the signature:
```java
void resample(SimpleImage src, SimpleImage dst, double left, double bottom, double right, double top);
```

This method computes the destination image dst using image content that comes from a rectangle in the source image src, leaving the existing size of the destination image unchanged. The source rectangle extends from left to right and bottom to top and is measured in units of input pixels. For instance, setting the source and destination sizes both to width \(n\) and height \(m\) and the rectangle to \((-0.5, -0.5, n - 0.5, m - 0.5)\) produces an unshifted, unscaled (though possibly slightly blurred, depending on the resampling algorithm) copy of the source image.
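A convention for locating destination sample points that is consistent with both the identity example above and the warmup's numbers is to place them at half-pixel offsets inside the source rectangle. A minimal sketch of that mapping (these helper names are ours, not the framework's):

```java
// Map the center of destination pixel column i (or row j) into source coordinates.
// With the rectangle (-0.5, -0.5, n-0.5, m-0.5), srcX(i) == i, which gives
// the unshifted, unscaled copy described above.
static double srcX(int i, int dstWidth, double left, double right) {
    return left + (i + 0.5) * (right - left) / dstWidth;
}

static double srcY(int j, int dstHeight, double bottom, double top) {
    return bottom + (j + 0.5) * (top - bottom) / dstHeight;
}
```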
There are several implementations of this interface, but the only one that is complete does not actually implement this specification—it doesn't resample at all but rather copies pixels directly from the source to the destination, ignoring the source rectangle. Your job is to complete the other three implementations of this interface:
- NearestNeighborResampler. To get the value for a destination pixel, you just compute the coordinates of the sample point in the source image, round to the nearest integer, and read a single pixel.
- LinearResampler. To get the value for a destination pixel, compute a weighted average of the four source-image pixels that surround the sample point, using bilinear interpolation weights.
- FilterResampler. This class supports resampling using any filter; the framework provides linear and cubic filters, but FilterResampler interacts with them only through the Filter interface. Computing a destination pixel involves visiting all the source pixels that fall within the support of the filter, weighting each according to the filter's value at the offset between the sample point and the pixel position. Since this implementation supports filters of arbitrary size, it can handle minification (downsampling) well: when the destination pixel spacing is smaller than 1 source pixel, the filter should be used at its natural radius, and when the destination pixel spacing is larger, the filter's radius should be scaled by the destination pixel spacing.
These resampling algorithms are all discussed in greater detail in the lecture slides and in Chapter 9 of the textbook. At the boundaries of the image, clamp the row and column indices of any pixel accesses that fall outside the image to the edges of the image so that the color of the last pixel extends past the edge. The result looks like this when the source rectangle extends outside the bounds of the source image.
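Putting the sample-point mapping and the clamping together, the inner loop of NearestNeighborResampler might look roughly like the sketch below. The SimpleImage accessor signatures here are assumptions; adapt them to the provided stubs.

```java
// Sketch of nearest-neighbor resampling with clamp-to-edge boundary handling.
// getPixel(x, y, c) / setPixel(x, y, c, v) are assumed accessor signatures.
for (int j = 0; j < dstHeight; j++) {
    for (int i = 0; i < dstWidth; i++) {
        double x = left + (i + 0.5) * (right - left) / dstWidth;
        double y = bottom + (j + 0.5) * (top - bottom) / dstHeight;
        int ix = Math.max(0, Math.min(srcWidth - 1, (int) Math.round(x)));
        int iy = Math.max(0, Math.min(srcHeight - 1, (int) Math.round(y)));
        for (int c = 0; c < 3; c++)
            dst.setPixel(i, j, c, src.getPixel(ix, iy, c));
    }
}
```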
The SimpleImage class is just what it says: a dead-simple image class that has just a width, a height, and an array of bytes that holds the pixel values. Images always have three channels, and the pixels are organized from left to right and bottom to top. For instance, a small image with width 3 and height 2 would have data.length == 18; the red, green, and blue components of the lower-left pixel at data[0], data[1], and data[2]; the upper-left pixel in data[9] ... data[11]; and the last pixel is the upper-right pixel at data[15] ... data[17]. In order to access the pixel data you will need to compute indices into this array; you can compute them directly in your resampling code, but things may turn out more readable if you make use of the provided method stubs for SimpleImage.getPixel and SimpleImage.setPixel.
Please notice that pixel value data[i] has the Java type byte, which is awkward because the pixel value is unsigned but Java always treats integer types as signed, so if you don't do anything special, pixel values above 127 will be interpreted as negative numbers. The way to deal with this is with an expression like data[i] & 0xff, which has the effect of transferring the 8 bits of the byte into an int, where they represent the correct value in the range [0, 255]. In converting your result back to byte, a simple cast will do the job, but you need to watch out for values that might be negative or greater than 255; you should clamp these to 0 and 255.
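Concretely, the round trip described above might be factored into two small helpers (a sketch of the idea, not framework code):

```java
// Reinterpret a signed Java byte as its unsigned pixel value in [0, 255].
static int toInt(byte pixel) {
    return pixel & 0xff;
}

// Convert a filtered result back to a byte, clamping first; a bare (byte)
// cast alone would silently wrap out-of-range values instead.
static byte toByte(double result) {
    int v = (int) Math.round(result);
    if (v < 0) v = 0;
    if (v > 255) v = 255;
    return (byte) v;
}
```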
Framework
You can complete this assignment while keeping completely within the abstraction boundary provided by the ResampleEngine interface. But you may like to look around the code to see how things work. The framework is a fairly minimal example of the use of the OpenGL 3 API, which it accesses from Java using the Lightweight Java Game Library (LWJGL) 3, which provides a very direct translation of the C API as a collection of static methods in Java. For windowing it uses the GLFW library provided through LWJGL. Since GLFW is quite low-level, the ResampleApp is built on the base class cs4620.util.SimpleOpenGLApp, which provides a simple and reasonably familiar interface in terms of init (called once at startup) and draw (called once per frame) methods, plus a collection of simple event handlers for window resizing, keyboard input, and mouse/trackpad input.
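For orientation, an app built on this base class looks roughly like the following skeleton; the init and draw names come from the description above, while the modifiers and the exact handler set are assumptions.

```java
// Rough skeleton of a SimpleOpenGLApp subclass (details assumed).
public class MyApp extends cs4620.util.SimpleOpenGLApp {
    @Override
    public void init() {
        // one-time setup at startup: load images, create GL resources
    }

    @Override
    public void draw() {
        // called once per frame: resample and draw the current view
    }
}
```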
Some of the functionality you will find in ResampleApp:
- It reads some images from disk with hard-coded filenames. These are easy to change if you'd like to load other images.
- It provides a simple pan-and-zoom UI that controls the source rectangle; the destination image size is controlled by resizing the window.
- Each frame, it calls the resampler with the source image and a destination image that matches the current window dimensions, and writes the destination image to an OpenGL texture of that same size. Then it draws that texture pixel-for-pixel into the window.
- It provides keyboard controls to change which resampler class is in use:
- B: B-spline
- C: Catmull-Rom
- M: Mitchell-Netravali
- L: Linear with direct implementation (using LinearResampler)
- F: Linear with filter interpolation (using FilterResampler with LinearFilter)
- N: Nearest neighbor
- P: PixelCopier (default)
- Notice that PixelCopier is a trivial version of "resampling" that just copies pixels one-for-one from the bottom left of the source image. It doesn't support upsampling or downsampling.
Expected results
You can resize the window to exactly match the image with the R key, save an image with the W key, and close the window with the Esc key. To help you determine whether your implementation is exactly correct, the framework supports setting the view to a fixed rectangle and writing an image to disk by pressing the T (Test) key. An image is saved with the current filter type in its name, and you can compare the result with the following images, which can also be found in data/cactus_reference_images.
To save a test image with a certain filter, press the key corresponding to that filter (B, C, F, L, M, N) and then immediately press T. Note that pressing R changes the filter back to the default (PixelCopier), so pressing T after R yields a different view from our reference images.
Please hand in a ZIP file of the "filters" directory on CMS. Please add a README file describing your system environment, how to run the code, and your partner's name and NetID.
Do this project alone or in groups of two, as you prefer.
A. Written Assignment
Due: Friday, September 20 2019 (11:59pm)
File: Mesh (PDF)
Please use the visualization tool for verifying your answers.
Points: 25
Submission: a PDF file upload
B. Programming Assignment
Due: Friday, September 27 2019 (11:59pm)
See below
Submission: a ZIP file upload
1. Introduction
In this assignment you will learn about the most widely used way to represent surfaces for graphics: triangle meshes. A triangle mesh is just a collection of triangles in 3D, but the key thing that makes it a mesh rather than just a bag of triangles is that the triangles are connected to one another to form a seamless surface. The textbook and lecture slides discuss the data structures used for storing and manipulating triangle meshes, and in this assignment we will work with the simplest kind of structure: an indexed triangle mesh.
Your job in this assignment is to write a simple mesh generation and processing utility that is capable of building triangle meshes to approximate some simple curved surfaces, and also can add some information to existing meshes. It reads and writes meshes stored in the popular OBJ file format, a text format with one line for each vertex and face in the mesh.
Suppose we wish to generate a mesh for a loaded die as shown below:

The cube extends from -1 to 1 in each of the x, y, and z directions. Each face has the following texture applied:

We could represent this mesh by listing out each triangle. Each triangle would be specified by 3 points in 3D space (i.e., the locations of each of its vertices), 3 points in 2D space (i.e., the uv-space texture coordinates of each of its vertices), and 3 unit-length 3D vectors (i.e., the normals of the surface at each vertex). This is sufficient to represent this mesh, but you may notice that we are repeating some information; for instance, the position of each corner of the cube is shared by at least 3 different triangles. Additionally, each triangle represents a flat surface, and thus the normals at each of its vertices are all equivalent. (Later, we will use triangles to approximate curved surfaces where this is not the case.) We can reduce this repetition by introducing an indexing scheme.
1.1 OBJ File Format:
The OBJ file format is one such indexing scheme, and is the one we'll be using in this assignment. In this format, the positions of all vertices are listed first, and then triangles are specified by providing 3 integers that index into this list of positions. Texture coordinates and normals are abstracted similarly; thus, each triangle is specified by 3 position indices, 3 texture coordinate indices, and 3 normal indices.
Vertex Position: A vertex position is specified on a single line by the letter "v" followed by 3 floating point
numbers that represent the x, y, and z coordinates of the point.
Texture Coordinate: A texture coordinate is specified by the letters "vt" followed by 2 floating point numbers
that represent the u and v coordinates respectively.
Vertex Normal: A vertex normal is specified by the letters "vn" followed by 3 floating point numbers
that represent the normal vector. (The OBJ file format does not require that these be
unit length, but we will require it for this assignment.)
Triangle Faces: Triangles are specified with the letter "f" followed by 3 groups of indices.
Groups of indices can be a single integer (indexing the vertex position), two integers
separated by a "/" (indexing the vertex position and texture coordinate respectively),
two integers separated by "//" (indexing the vertex position and vertex normal respectively),
and three integers each separated with a "/" (indexing the position, texture coordinates, and
normals). Please note all indices in the OBJ file format are 1-based!
Vertices should be specified in counter-clockwise order, assuming you are looking down
at the outer surface of the triangle (i.e., the triangle's normal is pointed towards you).
Given all of this information, we can specify the loaded die above with the following OBJ file:
v 1.0 -1.0 -1.0 v 1.0 -1.0 1.0 v -1.0 -1.0 1.0 v -1.0 -1.0 -1.0 v 1.0 1.0 -1.0 v 1.0 1.0 1.0 v -1.0 1.0 1.0 v -1.0 1.0 -1.0 vt 1.0 1.0 vt 0.0 1.0 vt 0.0 0.0 vt 1.0 0.0 vn 1.0 0.0 0.0 vn -1.0 0.0 0.0 vn 0.0 1.0 0.0 vn 0.0 -1.0 0.0 vn 0.0 0.0 1.0 vn 0.0 0.0 -1.0 f 1/1/4 2/2/4 3/3/4 f 1/1/4 3/3/4 4/4/4 f 1/4/1 5/1/1 6/2/1 f 1/4/1 6/2/1 2/3/1 f 2/4/5 6/1/5 7/2/5 f 2/4/5 7/2/5 3/3/5 f 3/2/2 7/3/2 8/4/2 f 3/2/2 8/4/2 4/1/2 f 4/2/6 8/3/6 5/4/6 f 4/2/6 5/4/6 1/1/6 f 5/1/3 8/2/3 7/3/3 f 5/1/3 7/3/3 6/4/3
Note that even though there are 12 total triangles, there are only 8 unique locations for each triangle vertex, so there is no point in listing them multiple times. Similarly, each vertex has only 1 of 4 total texture coordinates, and each vertex has 1 of 6 total normals. This scheme allows us to eliminate the redundancy of listing these values once for each triangle. Also note that many vertices can share a position, but each can have different texture coordinates and normals; for instance, each corner of the mesh is shared by at least 3 triangles, but each may face a different direction and therefore may have its own normal.
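To make the indexing concrete, here is a minimal sketch of parsing a single "f" line in the fully specified v/vt/vn form, converting OBJ's 1-based indices to 0-based array indices. It handles only this one form; the framework's OBJ reader (described below) is more general.

```java
// Parse one OBJ face line like "f 1/1/4 2/2/4 3/3/4".
String line = "f 1/1/4 2/2/4 3/3/4";
String[] tokens = line.trim().split("\\s+");
int[][] face = new int[3][3]; // [vertex][position, texcoord, normal]
for (int k = 1; k <= 3; k++) {
    String[] group = tokens[k].split("/");
    for (int c = 0; c < 3; c++)
        face[k - 1][c] = Integer.parseInt(group[c]) - 1; // OBJ indices are 1-based
}
```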
2. Helper Classes
For this assignment, we've provided a small library to help you build your utility. It contains data structures for dealing with vectors and meshes stored in the OBJ format.
The math package contains two classes, Vector2 and Vector3, for dealing with 2D and 3D vectors respectively. They also contain many methods such as vector addition and subtraction, dot and cross products, and component-wise operators. If you need to modify a vector in some way, chances are there is already a method that will do what you want—there is rarely a need to extract the individual components of a vector.
Please be aware that these classes contain in-place methods, even if they return a vector! This means that it is easy to make a mistake like the following:
```java
Vector3 a = new Vector3(...), b = new Vector3(...);
Vector3 c = a.add(b); // Don't do this!
```

In the snippet above, the add() method modifies a and assigns it to c. This is almost certainly not what you want. Instead, consider the following code:
```java
Vector3 c = a.clone().add(b);
```

This creates a copy of a and modifies it instead, leaving the original a unchanged. This may seem awkward, but it allows for very concise code when performing many successive operations on vectors:
```java
x.add(y).cross(z).normalize();
```

When in doubt, please consult the Javadocs. They're there for your benefit!
The meshgen package contains data structures for operating on OBJ-style meshes. They are very similar in structure to the OBJ file itself; vertex positions are stored as Vector3s in an ArrayList, and similarly for vertex texture coordinates and vertex normals. Faces of the mesh are represented via the OBJFace class, which defines a polygon on the mesh by listing integers for each vertex that index into the position, texture coordinate, and normal arrays. This structure allows you to reuse repeated positions, texture coordinates, and normals, which will come in handy when generating the geometries mentioned above.

The OBJMesh class provides methods for reading and writing OBJ files; thus, to create a mesh, you simply create a new instance of the OBJMesh class, fill in the appropriate class members, and call writeOBJ(). The class supports both 0- and 1-based indexing, allowing you to conform to either the Java standard or the OBJ standard as you set the mesh data. You can switch between indexing schemes via the indexBase static variable, which is 0 by default. Either way, meshes are automatically converted to 1-based indexing before being written to file.
Some useful methods include the setVertex() method in OBJFace, which allows you to set the indices of the position, texture coordinate, and normal for a particular vertex simultaneously. Also of note are OBJMesh's getPosition(), getUV(), and getNormal() methods, which retrieve the appropriate vectors from the mesh data, taking the indexing issues mentioned above into account.
4. Programming Tasks
To get the basecode, if you already have the course framework repository on your machine, do git pull in the folder "frameworks_cs4620fa19/". If you don't have the basecode on your machine, you can download it with git clone:

```
git clone https://github.com/smarschner/frameworks_cs4620fa19
```

Once you have the basecode downloaded, you can find the functions to be implemented in MeshGen.java.
For this homework, you are going to fill in the code to complete a command-line tool, MeshGen, which has two main functions:
1. Building a mesh (sphere or cylinder).
2. Generating normals for a user-input mesh.
Task 1: Building a mesh (40pt)
Given the required arguments, this tool will generate a cylinder or sphere and store the mesh in an OBJ file. To run the tool with arguments, you can either run the following command in a terminal:

```
java MeshGen -g <sphere|cylinder> [-n <divisionsU>] [-m <divisionsV>] -o <outfile.obj>
```

or put the arguments directly in the Run Configurations:

For this usage, the first required input argument is the geometry specifier, and the second is the output filename. If the geometry specifier is one of the fixed strings sphere or cylinder, a triangle mesh is generated that approximates that shape, where the number of triangles generated is controlled by the optional -n and -m options (details below), and written to the output OBJ file.
A triangle mesh that approximates a smooth surface should have normal vectors stored at the vertices that indicate the direction normal to the exact surface at each point. When generating the predefined shapes, you generate points that are on the surface, and for each you should also calculate the normal vector that is perpendicular to the surface at that point.
Additionally, the generated meshes will have texture coordinates. Note that you should take advantage of the indexed storage scheme where possible; for full credit, you should make sure that if two triangles meet at a position in space, that position is specified only once in the resulting OBJ file. Similarly, if two vertices have both the same location and texture coordinates, those texture coordinates should be specified only once; if they share the same location and normal, then the normal should be specified only once. (Duplicate normals and texture coordinates are allowed, but only as long as they are used at different places on the mesh.)
In this task you will implement the functions OBJMesh cylinder(int divisionsU) and OBJMesh sphere(int divisionsU, int divisionsV). These functions take divisionsU (and, for the sphere, divisionsV) as input, compute the vertices, uvs, vertex normals, and faces, and store them in an OBJMesh object.
Here is an example of generating a triangle mesh in an OBJMesh object.
```java
// Put three vertices into the mesh
outputMesh.positions.add(new Vector3(0.0f, 1.0f, 0.0f));  // 0
outputMesh.positions.add(new Vector3(-1.0f, 0.0f, 0.0f)); // 1
outputMesh.positions.add(new Vector3(1.0f, 0.0f, 0.0f));  // 2
// Put three uvs into the mesh
outputMesh.uvs.add(new Vector2(0.0f, 0.0f)); // 0
outputMesh.uvs.add(new Vector2(1.0f, 0.0f)); // 1
outputMesh.uvs.add(new Vector2(0.5f, 0.5f)); // 2
// Put a normal into the mesh
outputMesh.normals.add(new Vector3(0, 0, 1)); // 0
// Initialize an OBJFace object, which represents a face in the mesh:
// (number of vertices for this face, hasUV, hasNormal)
OBJFace triangle = new OBJFace(3, true, true);
// Set the vertices for this triangle in counterclockwise order:
// (index of vertex within this face, index of position, index of uv, index of normal)
triangle.setVertex(0, 0, 0, 0);
triangle.setVertex(1, 1, 1, 0);
triangle.setVertex(2, 2, 2, 0);
// Put this triangle into the mesh
outputMesh.faces.add(triangle);
```

When the method OBJMesh sphere(int divisionsU, int divisionsV) is correctly implemented, you will get an OBJ file of a sphere (16x32) by running MeshGen.java with the arguments "-g sphere -n 16 -m 32 -o sphere.obj".
To help you debug your implementation, we also provide a visualization tool; you can view the generated mesh by dragging the OBJ file onto the window, and you can select an image to apply as a texture. (See "Testing your program" below for details of this tool.) An example result is shown below.

Cylinder
The cylinder has radius 1 and height 2 and is centered at the origin; its longitudinal axis is aligned with the y-axis. It is tessellated with n divisions arranged radially around the outer surface. The two ends of the cylinder are closed by disc-shaped caps parallel to the xz-plane.
The vertices around the rims of the cylinder share the same positions but have 2 or more normals and texture coordinates (one for the shell and one for the cap). Each cap consists of n vertices arranged in a circle, plus a single vertex at the point where the cap intersects the y-axis; this center vertex is incorporated into each triangle that makes up the cap.
Along the cylinder's shell (i.e., excluding its caps), texture coordinates in the u dimension run from 0 to 1 in a counterclockwise direction as viewed from the +y direction. There is a texture seam (where u=0 meets u=1) along the vertices that have a z-coordinate of −1; multiple texture coordinates occur at the same position in space to allow this discontinuity. Coordinates in the v dimension run from 0 to 0.5, increasing in the +y direction. The texture coordinates for the two caps are circles inscribed in the upper-left (for the −y cap) and upper-right (for the +y cap) quadrants of the unit square in the uv-plane, with the +u direction corresponding to the +x direction in 3D space, and the +v direction corresponding to the −z direction for the top cap and the +z direction for the bottom cap. The -m flag is ignored for the cylinder.
Sphere
The sphere has radius 1 and is centered at the origin in 3D coordinates. It is tessellated in latitude-longitude fashion, with n divisions around the equator and m divisions from pole to pole along each line of longitude. Note that m divisions requires m+1 vertices from pole to pole. The North pole is at (0,1,0), the South pole at (0,−1,0), and points on the Greenwich meridian have coordinates (0,y,z) with z>0.
The mesh is generated with vertex normals that are normal to the exact sphere, and with texture coordinates (u,v) where u depends only on longitude, with u=0 at longitude 180 degrees West and u=1 at 180 degrees East, and where v depends only on latitude, with v=0 at the South Pole and v=1 at the North pole. Each quadrilateral formed by two adjacent longitude lines and two adjacent latitude lines is divided on the diagonal to form two triangles.
The texture coordinates along the 180th meridian are duplicated: one texture coordinate has u=0 and the other has u=1, to enable the discontinuity required for correct wrapping of an image texture across the seam. Each pole has n different texture coordinates, to enable nearly-appropriate texture in the row of triangles adjacent to the pole. Every other triangle around each pole becomes degenerate, i.e., collapses into a line; these degenerate triangles should be omitted.


The default value of n should be 32, and the default value of m should be 16. You may assume that both arguments will always be greater than 2.
If you have your sphere and cylinder correctly implemented, you can apply a texture to your mesh in our viewing tool. Here is an example result for the cylinder and sphere.

Task 2: Computing vertex normals (35pt)
For this usage, the user provides an input OBJ mesh file, which the program reads in. The mesh is assumed to have no normals (if normals are included in the input file, they are ignored). The program then generates approximate normals at each vertex as described below, and writes the resulting mesh to the user-provided output file.
To run the function with arguments, you can either run the following command in a terminal:

```
java MeshGen -i <infile.obj> -o <outfile.obj>
```

or put the arguments directly in the Run Configurations:

Since the original surface which the mesh approximates is forgotten (if there even was one), we need some way to make up plausible normals. There are a number of ways to do this, and we'll use a simple one for this assignment: the normal at a vertex is the average of the geometric normals of the triangles that share this vertex.
Your first thought might be to do this as a loop over vertices, with an inner loop over the triangles that share that vertex:
```
for each vertex v
    normal[v] = (0,0,0)
    for each triangle t around v
        normal[v] += normal of triangle t
    normal[v].normalize()
```
With the appropriate data structures, this is possible, but in our case there's no efficient way to do the inner loop: our data structure tells us what vertices belong to a triangle, but the only way to find triangles that belong to a vertex is to search through the whole list of triangles. This is possible but would be quadratic in the mesh size, which is bad news for large meshes.
However, it's simple to do it with the loops interchanged:
```
for each vertex v
    normal[v] = (0,0,0)
for each triangle t
    for each vertex v around t
        normal[v] += normal of triangle t
for each vertex v
    normal[v].normalize()
```
This way the inner loop can efficiently visit just the necessary vertices. Nifty!
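In Java, using the OBJMesh structures described earlier, the interchanged loops come out close to the pseudocode. In this sketch, positionIndexOf(face, k) stands in for however OBJFace actually exposes its position indices, the getPosition(face, k) signature follows the description above but is an assumption, and the vector operations use the in-place semantics noted earlier; treat it as an outline, not the reference solution.

```java
// Sketch of vertex-normal estimation with the loops interchanged
// (0-based indexing assumed; one accumulated normal per position).
mesh.normals.clear();
for (int i = 0; i < mesh.positions.size(); i++)
    mesh.normals.add(new Vector3(0, 0, 0));

for (OBJFace f : mesh.faces) {
    Vector3 p0 = mesh.getPosition(f, 0);
    Vector3 p1 = mesh.getPosition(f, 1);
    Vector3 p2 = mesh.getPosition(f, 2);
    // Geometric normal of the triangle, from the counterclockwise winding.
    Vector3 n = p1.clone().sub(p0).cross(p2.clone().sub(p0)).normalize();
    for (int k = 0; k < 3; k++)
        mesh.normals.get(positionIndexOf(f, k)).add(n); // in-place accumulate
}

for (Vector3 n : mesh.normals)
    n.normalize(); // in-place normalize
```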
5. Extra credit (each item up to 10pt)
There are many opportunities to expand your implementation and receive extra credit. Some ideas are listed below.
- A torus is a doughnut-shaped surface defined by a major radius, affecting the size of the hole, and a minor radius, affecting the thickness of the ring. Add a torus to the list of geometries generated by your program. Your code should create a torus with major radius 1 and minor radius r (controlled by an additional -r flag with a default of 0.25). Its u coordinates are like the sphere's, and the v coordinate runs from 0 to 1 around the inside of the torus, with the direction arranged so that the texture is right-reading from the outside (i.e., the texture is not flipped when mapped to the surface). Like the sphere, it has a seam on the −z half of the yz-plane, and it has a similar seam around the inner surface of the doughnut hole; vertices along each seam share a pair of texture coordinates, and the single vertex at position (0,0,r−1), where the seams meet, shares 4 texture coordinates.
Specs illustration for the torus (for the case of n = 32, m = 16, and r = 0.25)

- You may have noticed that for the sphere, the triangles get compressed near the poles. Provide an alternative tessellation approach to create a more even mesh; see the icosphere as an example. (Note that this feature should be provided in addition to the sphere specification described above.)
- The compare() method provided in the OBJMesh class compares each face in one mesh with each face in the other until it finds a match. This naive, brute-force algorithm is very slow when given large meshes as input. Provide an alternative method with better performance. (Can you reduce the complexity from O(n²) to O(n log n)?)
- Per the OBJ specification, the OBJFace class supports faces that are arbitrary polygons, not just triangles. However, in many cases, triangles are much nicer to work with. Extend your MeshGen utility to generate a triangle mesh from an input mesh with polygonal faces. That is, take all faces that have 4 or more vertices and split them into triangles.
- As mentioned above, the OBJ data format allows the reuse of positions, vertex coordinates, and normals through indexing. However, this reuse is not enforced. Extend your MeshGen utility to take in an uncompressed OBJ file and remove duplicate positions, texture coordinates, and normals, adjusting vertex indices as necessary.
- Many algorithms that operate on triangle meshes assume that the mesh is well-behaved; that is, it has some structure that is more predictable than just a soup of arbitrary triangles. One class of well-behaved mesh is known as a manifold mesh. Manifold meshes satisfy the following properties:
  - Every edge in the mesh borders either 1 or 2 triangles.
  - Every vertex in the mesh belongs to a connected set of triangles; that is, if you consider all triangles that share one particular vertex, you can draw a curve from any triangle to any other triangle without leaving the surface of the mesh, and without crossing over the vertex. "Bowtie" configurations are not allowed.
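As a sketch of how the first property can be checked efficiently, you can count the faces bordering each undirected edge with a hash map; this assumes triangular faces and reuses the hypothetical positionIndexOf accessor from earlier.

```java
// Count how many faces border each undirected edge; in a manifold mesh
// every edge's count must be 1 or 2. (Sketch; assumes triangular faces.)
java.util.Map<Long, Integer> edgeFaces = new java.util.HashMap<>();
for (OBJFace f : mesh.faces) {
    for (int k = 0; k < 3; k++) {
        long a = positionIndexOf(f, k);
        long b = positionIndexOf(f, (k + 1) % 3);
        long key = (Math.min(a, b) << 32) | Math.max(a, b); // order-independent edge key
        edgeFaces.merge(key, 1, Integer::sum);
    }
}
boolean edgesOk = edgeFaces.values().stream().allMatch(c -> c <= 2);
```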
If you implement any of the above options, please provide sample input and/or output with your submission that demonstrates that your code is functioning correctly.
6. Testing your program
Since your program just writes a file full of inscrutable numbers, you need some way to look at the results. We have provided a mesh visualization tool that works in your browser, available here. To view a mesh, click "Select OBJ File" (or just drag an OBJ file onto the window). If the mesh has texture coordinates, you can upload an image texture by clicking "Select Texture File" (or, again, just drag the image file onto the window). The viewer optionally shows the wireframe structure and vertex normals of the mesh. Coordinate axes may also be displayed; +x is red, +y is green, and +z is blue. If triangle faces are wound in the wrong direction, they appear yellow and will not be textured.
There are a number of additional programs to visualize meshes, for example Blender, ObjViewer, MeshLab, or p3d.in. Be careful, though! Some of these programs add normals to your meshes if they don't already exist, or if they're malformed.
In addition, the OBJMesh class contains the isValid() method to check that your mesh conforms to the OBJ standard, and the compare() method to check if two different meshes are equivalent (ignoring the order in which faces are defined). Thus, another way to verify your code is correct is to compare your output to reference meshes, which are included in the data directory. Reference meshes are given for the cylinder and sphere with default arguments. To receive full credit, your output must be equivalent to ours, and generate no warnings when used with the verbose option. (We will have other reference meshes to test your code against in addition to the ones provided.) In addition, we have provided two meshes without normals (a bunny and a horse), as well as examples of what the meshes should look like when normals have been estimated. Be patient with the horse example, as it contains many triangles and may take a while to process.
Handing in
For ease of grading, please hand in a ZIP file to CMS. The ZIP file should contain MeshGen.java and a README file that contains:
- You and your partner's names and NetIDs.
- Explanations of any extensions you implemented and incantations to run them via the command line.
- Anything else you want us to know.
If there is anything else you would like to include in your solution (e.g., meshes you generated for extra credit, or other implementations in OBJFace.java, etc.), please add it to the ZIP file and submit.
Exams
There will be an evening midterm and a final exam:
- Midterm: October 29, 7:30pm, Gates G01 (review session in class October 28)
- Final: (not yet scheduled)
The exams are closed book, but you're allowed to bring one letter-sized piece of paper with writing on both sides, to avoid the need to memorize things.
Old Exams
Here are links to exams from previous versions of this class. They can be useful for studying, because they give an indication of the style and scope of problems that may be expected. Note that they are not a useful reference for topics to expect, because the content of the course has changed from year to year.
- CS 4620 Fall 2018
- CS 4620 Spring 2018
- CS 4620 Spring 2017
- CS 4620 Fall 2014
- CS 4620 Fall 2013
- CS 4620 Fall 2010
- CS 4620 Fall 2009
- CS 4620 Fall 2008
About CS4620
Questions, help, discussion: The instructors are available to answer questions, advise on projects, or just to discuss interesting topics related to the class at office hours and by appointment as needed. For electronic communication we are using Piazza (handy link at the top of this page).
Practicum: In the optional practicum course, CS4621/5621, you will get a more in-depth exposure to the course material and complete an additional project. Students taking the practicum will also be required to attend lectures most weeks during the Monday meeting time for CS4621/5621.
Academic integrity: We assume the work you hand in is your own, and the results you hand in are generated by your program. You are welcome to read whatever you want to learn what you need to do the work, but we do expect you to build your own implementations of the methods we are studying. If you are ever in doubt, just include a citation in your code or report indicating clearly and specifically where some particular idea or implementation came from, whether it be a classmate, a web site, another piece of software, or anything—this always maintains your honesty, whether the source was used in a good way or not.
The principle is that an assignment is an academic document, like a journal article. When you turn it in, you are claiming that everything in it is your original idea (or is original to you and your partner, if you are handing in as a pair) unless you cite a source for it.
School can be stressful, and your coursework and other factors can put you under a lot of pressure, but that is never a reason for dishonesty. If you feel you can't complete the work on your own, come talk to the professor or the TAs, or your advisor, and we can help you figure out what to do. Think before you hand in!
Clear-cut cases of dishonesty will result in failing the course.
For more information see Cornell's Code of Academic Integrity.
Collaboration: You are welcome (encouraged, even) to discuss projects among yourselves in general terms. But when it comes to writing up the homeworks or implementing the projects, you need to be working alone (or only with your partner if you are doing a project as a pair). In particular, it is never OK for you to see another student's homework writeup or another team's program code, and certainly never OK to copy parts of one person's or team's writeup, code, or results into another's, even if the general solution was worked out together.
Books
Textbook:
Marschner & Shirley, Fundamentals of Computer Graphics, fourth edition (available online through the Cornell library two ways: 1 2)
Supplemental Books and Materials:
Gortler, Foundations of 3D Computer Graphics, first edition
- OpenGL "Red Book" --- *the* reference for OpenGL programming
- Samuel R. Buss, 3-D Computer Graphics: A Mathematical Introduction with OpenGL
- Andrew S. Glassner, An Introduction to Ray Tracing, 1989