Realistic Rendering of Human Hair

In this project we are working toward realistic, physically based rendering of whole heads of hair. To do this we must first accurately model the scattering of light by individual fibers, and then use that model as the basis for a simulation of how light reflects from hair to hair through a mass of tens of thousands of fibers.

Scattering simulation

I became interested in hair rendering while talking with Jed Lengyel at Microsoft Research about his work on real-time fur rendering, and at Stanford I discovered that Pat Hanrahan and Henrik Jensen had been thinking along similar lines. There we developed the first version of a model for scattering from individual hair fibers, which we finished after I moved to Cornell:

Stephen R. Marschner, Henrik Wann Jensen, Mike Cammarano, Steve Worley, and Pat Hanrahan. “Light Scattering from Human Hair Fibers.” In proceedings of SIGGRAPH 2003. San Diego, July 2003.

What distinguishes this model from earlier hair scattering models is that it demonstrably resembles the actual scattering from real hair fibers, as measured both by us (using the Stanford Spherical Gantry) and by researchers in the cosmetics industry.
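
Roughly speaking, the model factors the fiber's scattering function into contributions from the three dominant paths light can take through a dielectric cylinder: surface reflection (R), transmission through the interior (TT), and a path with one internal reflection (TRT). In the paper's notation (inclinations θ measured from the plane perpendicular to the fiber, φ the relative azimuth around it, θ_d the difference angle), the structure is approximately

    S(\theta_i, \phi_i; \theta_r, \phi_r) = \sum_{p \in \{R,\, TT,\, TRT\}} \frac{M_p(\theta_i, \theta_r)\, N_p(\theta_d, \phi)}{\cos^2 \theta_d}

where each longitudinal term M_p is essentially a Gaussian lobe shifted off the specular cone by the tilt of the cuticle scales, and each azimuthal term N_p comes from the optics of the circular cross section; the exact definitions and normalization are in the paper.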

A realistic scattering model for individual fibers is only the starting point for realistic rendering of a full head of hair, unless the hair is quite dark in color. For light brown, red, blond, or white hair, multiple scattering (light that reflects from more than one fiber on its way from the light source to the eye) is what determines the appearance of the hair, and even its overall color: lightly pigmented fibers absorb so little at each scattering event that light surviving many bounces still carries significant energy, whereas dark hair absorbs most of it within a bounce or two.

The problem of multiple scattering in hair looks a lot like a volume rendering problem, but because of hair's anisotropic structure it is quite a difficult one, which formerly could only be solved at great computational expense by Monte Carlo path tracing. My student Jon Moon and I (with considerable input from Andrew Butts) developed a method to solve this quickly enough to be practical:

Jonathan T. Moon and Stephen R. Marschner. “Simulating Multiple Scattering in Hair Using a Photon Mapping Approach.” In proceedings of SIGGRAPH 2006. Boston, July 2006.
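
The underlying strategy is the classic two-pass photon-mapping one, adapted to a volume of fibers: first shoot photons from the lights and let them scatter from fiber to fiber, storing a record at each scattering event; then, at rendering time, estimate the multiply scattered light at a shading point by gathering the stored photons near it. The toy Python sketch below shows only that two-pass structure, using a homogeneous, isotropically scattering stand-in for the hair volume (the albedo and mean free path are made-up illustrative values); the actual method scatters photons with the single-fiber model above and uses a much more careful, direction-aware density estimate.

    import math, random

    # Toy stand-in for a head of hair: photons take exponentially distributed
    # steps through a homogeneous medium, lose a fraction of their power at
    # each "fiber" interaction, and pick a new direction at random.
    ALBEDO = 0.85          # fraction of power surviving each interaction (light hair)
    MEAN_FREE_PATH = 0.5   # average distance between interactions, in scene units

    def random_direction():
        """Uniform random direction on the unit sphere."""
        z = random.uniform(-1.0, 1.0)
        phi = random.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(1.0 - z * z)
        return (r * math.cos(phi), r * math.sin(phi), z)

    def trace_photons(n_photons, max_bounces=30):
        """Pass 1: random-walk photons from a light at the origin, storing a
        (position, power) record at every scattering event."""
        photon_map = []
        for _ in range(n_photons):
            pos, dirn = (0.0, 0.0, 0.0), random_direction()
            power = 1.0 / n_photons
            for _ in range(max_bounces):
                step = random.expovariate(1.0 / MEAN_FREE_PATH)
                pos = tuple(p + step * d for p, d in zip(pos, dirn))
                photon_map.append((pos, power))   # incident power at the event
                power *= ALBEDO                   # absorption at the fiber
                dirn = random_direction()         # real hair scattering is not isotropic
        return photon_map

    def estimate_density(photon_map, point, radius):
        """Pass 2: estimate scattered energy density near a shading point by
        gathering all stored photons within a radius (brute force here)."""
        r2 = radius * radius
        total = sum(power for pos, power in photon_map
                    if sum((a - b) ** 2 for a, b in zip(pos, point)) <= r2)
        return total / (4.0 / 3.0 * math.pi * radius ** 3)

    photons = trace_photons(20000)
    print(estimate_density(photons, (1.0, 0.0, 0.0), 0.25))

The point of the photon map is that the expensive light transport is computed once, in the first pass, and then reused for every pixel in the second.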

Later, we developed a new method that follows a similar approach but runs much faster:

Jonathan T. Moon, Bruce Walter, and Steve Marschner. “Efficient Multiple Scattering in Hair Using Spherical Harmonics.” In proceedings of SIGGRAPH 2008. Los Angeles, August 2008.
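
The gain comes from representing directional quantities compactly: in broad strokes, the directional distribution of multiply scattered light is represented by a small number of spherical harmonic coefficients rather than by large collections of individual photons. The self-contained sketch below illustrates just that basic tool (projecting a directional function onto the first nine real spherical harmonics by Monte Carlo integration and evaluating the smooth reconstruction); it is not the algorithm from the paper.

    import math, random

    def sh_basis(d):
        """The nine real spherical harmonics for bands l = 0, 1, 2,
        evaluated at a unit direction d = (x, y, z)."""
        x, y, z = d
        return [
            0.282095,                        # l=0
            0.488603 * y,                    # l=1, m=-1
            0.488603 * z,                    # l=1, m=0
            0.488603 * x,                    # l=1, m=1
            1.092548 * x * y,                # l=2, m=-2
            1.092548 * y * z,                # l=2, m=-1
            0.315392 * (3.0 * z * z - 1.0),  # l=2, m=0
            1.092548 * x * z,                # l=2, m=1
            0.546274 * (x * x - y * y),      # l=2, m=2
        ]

    def random_direction():
        z = random.uniform(-1.0, 1.0)
        phi = random.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(1.0 - z * z)
        return (r * math.cos(phi), r * math.sin(phi), z)

    def project(f, n_samples=100000):
        """Project a directional function f onto the SH basis by Monte Carlo
        integration over the sphere (total solid angle 4*pi)."""
        coeffs = [0.0] * 9
        for _ in range(n_samples):
            d = random_direction()
            fd = f(d)
            for i, y in enumerate(sh_basis(d)):
                coeffs[i] += fd * y
        return [c * 4.0 * math.pi / n_samples for c in coeffs]

    def evaluate(coeffs, d):
        """Evaluate the band-limited reconstruction at direction d."""
        return sum(c * y for c, y in zip(coeffs, sh_basis(d)))

    # Example: a smooth lobe around the +z axis, and its SH reconstruction.
    f = lambda d: max(0.0, d[2]) ** 2
    coeffs = project(f)
    print(evaluate(coeffs, (0.0, 0.0, 1.0)), f((0.0, 0.0, 1.0)))

Nine coefficients are enough for smooth distributions like the one above, which is why such a representation can be far cheaper to store and query than a raw photon map.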

A great deal remains to be done to fully understand and efficiently simulate multiple scattering in hair.

Other projects

Another project that got its start at Microsoft, where François Sillion and I overlapped briefly, led to a method for capturing hair geometry that was developed mainly at INRIA using data that I captured at Stanford:

Stéphane Grabli, François Sillion, Stephen R. Marschner, and Jerome E. Lengyel. “Image-Based Hair Capture by Inverse Lighting.” In proceedings of Graphics Interface 2002. Calgary, May 2002.

I also co-authored a survey article for IEEE TVCG covering many aspects of computer-generated hair, including modeling, simulation, and rendering; I was primarily involved in the sections on scattering models and rendering.

Kelly Ward, Florence Bertails, Tae-Yong Kim, Stephen R. Marschner, Marie-Paule Cani, and Ming C. Lin. “A Survey on Hair Modeling: Styling, Simulation, and Rendering.” IEEE TVCG (to appear). 2006.


Steve Marschner