First, create an Image object, which is backed by an img tag. Then set the src field of the image object to the file name that you want:

```javascript
var cornellImage = new Image();
cornellImage.onload = function() {
    runWebGL(cornellImage);
};
cornellImage.src = "cornell-logo.jpg";
```
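If your scene needs several textures, the same onload pattern generalizes. Here is a small sketch (the makeCountdown and loadImages helpers are our own, not part of these notes) that starts WebGL only after every image has finished loading:

```javascript
// Hypothetical helpers, not part of the notes: makeCountdown returns a
// function that invokes `done` after being called `count` times, and
// loadImages uses it to wait for all images before firing a callback.
function makeCountdown(count, done) {
    return function () {
        count -= 1;
        if (count === 0) done();
    };
}

function loadImages(urls, onAllLoaded) {
    var images = urls.map(function () { return new Image(); });
    var tick = makeCountdown(urls.length, function () { onAllLoaded(images); });
    images.forEach(function (image, i) {
        image.onload = tick;
        image.src = urls[i];  // setting src starts the download
    });
}

// Usage sketch:
// loadImages(["cornell-logo.jpg", "bricks.jpg"], runWebGLWithImages);
```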
The runWebGL function takes the image instance and proceeds to run WebGL with it (creating the context, setting up the rendering loop, and all that). It is important that the image is loaded before we proceed with WebGL; otherwise, we don't have the data to work with. Once the image is available, create a texture object and upload the data to the GPU:

```javascript
// Step 1: Create the texture object.
var cornellTexture = gl.createTexture();

// Step 2: Bind the texture object to the "target" TEXTURE_2D.
gl.bindTexture(gl.TEXTURE_2D, cornellTexture);

// Step 3: (Optional) Tell WebGL that pixels are flipped vertically,
// so that we don't have to deal with flipping the y-coordinate.
gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true);

// Step 4: Upload the image data to the GPU.
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, cornellImage);

// Step 5: (Optional) Create a mipmap so that the texture can be anti-aliased.
gl.generateMipmap(gl.TEXTURE_2D);

// Step 6: Clean up. Tell WebGL that we are done with the target.
gl.bindTexture(gl.TEXTURE_2D, null);
```
The TEXTURE_2D target tells WebGL that we are dealing with the 2D texture system. There are other targets, such as TEXTURE_CUBE_MAP, and WebGL 2 has TEXTURE_3D.
texImage2D uploads the texture to the GPU. You'll be using this a lot, so it's better to understand it:

```
void gl.texImage2D(target, level, internalformat, format, type, HTMLImageElement? pixels);
```
target specifies the texture system you want to deal with. In our case, this is always TEXTURE_2D.

level is the "level of detail" (LOD). LODs are used to anti-alias rendering results. In our case, we will always upload to the "finest" or "base" level, which is denoted by 0. So this parameter is almost always 0, unless you do something fancier.
internalformat is the format in which pixels are stored on the GPU. I found that gl.RGBA works in most cases.
format and type specify the form of the data that comes in from JavaScript. format specifies how many channels a pixel has; for this one, gl.RGBA works in most cases.
type specifies the data type used to store each channel. If you load an image from a JPG or PNG file, it's gl.UNSIGNED_BYTE.
pixels is where you put the image tag you loaded.

gl.generateMipmap creates a mipmap out of the uploaded texture data. Mipmapping will be covered in CS 4620 next week. So stay tuned.
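texImage2D also has a variant that takes raw pixels from a typed array instead of an image tag: gl.texImage2D(target, level, internalformat, width, height, border, format, type, pixels). As a sketch (the helper name and placeholder color are our choices, not from the notes), you can use it to upload a 1×1 solid-color texture to display while the real image is still loading:

```javascript
// One blue RGBA pixel: four UNSIGNED_BYTE channels, matching the
// format/type arguments passed to texImage2D below.
var placeholderPixel = new Uint8Array([0, 0, 255, 255]);

// Hypothetical helper (not part of the notes): create a 1x1 placeholder
// texture from raw bytes using the typed-array overload of texImage2D.
function createPlaceholderTexture(gl) {
    var texture = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, texture);
    // width = 1, height = 1, border = 0; pixel data comes from the typed array.
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1, 1, 0,
                  gl.RGBA, gl.UNSIGNED_BYTE, placeholderPixel);
    gl.bindTexture(gl.TEXTURE_2D, null);
    return texture;
}
```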
But, if you must have a non-power-of-two (NPOT) texture, WebGL does include limited native support. [...] The catch: these textures cannot be used with mipmapping, and they must not "repeat" (tile or wrap). In particular, do not call gl.generateMipmap on NPOT textures.
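This rule can be encoded in a small helper (the function names are ours, not part of the notes): generate mipmaps only when both dimensions are powers of two, and otherwise clamp both axes and pick a minification filter that does not need a mipmap:

```javascript
// Check whether x is a positive power of two using the classic bit trick:
// a power of two has exactly one bit set, so x & (x - 1) is zero.
function isPowerOfTwo(x) {
    return x > 0 && (x & (x - 1)) === 0;
}

// Hypothetical helper: finish texture setup after texImage2D, taking the
// NPOT restrictions into account.
function finishTextureSetup(gl, width, height) {
    if (isPowerOfTwo(width) && isPowerOfTwo(height)) {
        gl.generateMipmap(gl.TEXTURE_2D);
    } else {
        // NPOT: no mipmaps and no repeating, so clamp both axes and use
        // a MIN_FILTER that works without a mipmap.
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
    }
}
```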
To use the texture, declare a uniform variable of type sampler2D in your fragment shader:

```glsl
uniform sampler2D texture;
```

You can then sample from it with the texture2D(&lt;sampler2D&gt;, &lt;texCoord&gt;) function. The &lt;texCoord&gt; parameter must be a vec2. The return value is a vec4, so it has the 4 RGBA channels:

```glsl
precision highp float;

varying vec2 geom_texCoord;

uniform sampler2D texture;

void main() {
    gl_FragColor = texture2D(texture, geom_texCoord);
}
```
The texture coordinate (geom_texCoord) comes from the vertex shader:

```glsl
attribute vec3 vert_position;
attribute vec2 vert_texCoord;

varying vec2 geom_texCoord;

void main() {
    gl_Position = vec4(vert_position, 1.0);
    geom_texCoord = vert_texCoord;
}
```
Finally, before drawing, bind the texture to a texture unit and set the sampler uniform to that unit's index:

```javascript
if (gl.getUniformLocation(program, "texture") != null) {
    // Step 1: Activate a "texture unit" of your choosing.
    gl.activeTexture(gl.TEXTURE0);

    // Step 2: Bind the texture you want to use.
    gl.bindTexture(gl.TEXTURE_2D, cornellTexture);

    // Step 3: Set the uniform to the "index" of the texture unit you just activated.
    var textureLocation = gl.getUniformLocation(program, "texture");
    gl.uniform1i(textureLocation, 0);
}
```
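The same pattern extends to multiple textures in one draw call: activate a different unit for each texture, and point each sampler uniform at its unit's index. A sketch (the uniform names texture0 and texture1 are hypothetical, not from the notes):

```javascript
// Hypothetical helper: bind two textures to texture units 0 and 1 for a
// shader with two sampler2D uniforms, "texture0" and "texture1".
function bindTwoTextures(gl, program, textureA, textureB) {
    gl.activeTexture(gl.TEXTURE0);
    gl.bindTexture(gl.TEXTURE_2D, textureA);
    gl.uniform1i(gl.getUniformLocation(program, "texture0"), 0);

    gl.activeTexture(gl.TEXTURE1);  // gl.TEXTURE1 === gl.TEXTURE0 + 1
    gl.bindTexture(gl.TEXTURE_2D, textureB);
    gl.uniform1i(gl.getUniformLocation(program, "texture1"), 1);
}
```

Note that activeTexture selects which unit subsequent bindTexture calls affect, while uniform1i takes the unit's plain index (0, 1, ...), not the gl.TEXTUREi constant.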