Understanding, Acquiring and Rendering Translucent Appearance

High-Order Similarity Relations in Radiative Transfer

Shuang Zhao1, Ravi Ramamoorthi2, and Kavita Bala1
1Cornell University, 2University of California, Berkeley
 
ACM Transactions on Graphics (SIGGRAPH 2014), 33(4), July 2014

Abstract

Radiative transfer equations (RTEs) with different scattering parameters can lead to identical solution radiance fields. Similarity theory studies this effect by introducing a hierarchy of equivalence relations called "similarity relations". Unfortunately, given a set of scattering parameters, it remains unclear how to find altered ones satisfying these relations, significantly limiting the theory's practical value. This paper presents a complete exposition of similarity theory, which provides fundamental insights into the structure of the RTE's parameter space. To utilize the theory in its general high-order form, we introduce a new approach to solve for the altered parameters including the absorption and scattering coefficients as well as a fully tabulated phase function. We demonstrate the practical utility of our work using two applications: forward and inverse rendering of translucent media. Forward rendering is our main application, and we develop an algorithm exploiting similarity relations to offer "free" speedups for Monte Carlo rendering of optically dense and forward-scattering materials. For inverse rendering, we propose a proof-of-concept approach which warps the parameter space and greatly improves the efficiency of gradient descent algorithms. We believe similarity theory is important for simulating and acquiring volume-based appearance, and our approach has the potential to benefit a wide range of future applications in this area.
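For concreteness, the classical first-order similarity relation (the widely used "reduced scattering" approximation, which this paper generalizes to higher orders with fully tabulated phase functions) can be sketched as follows; the function and variable names are ours, not the paper's:

```python
def similarity_first_order(sigma_a, sigma_s, g):
    """Classical first-order similarity relation.

    Replaces an anisotropic medium (mean scattering cosine g) with an
    isotropic one having the same reduced scattering coefficient
    (1 - g) * sigma_s; absorption is unchanged.  Multiply-scattered
    radiance is approximately preserved in optically dense media."""
    return sigma_a, (1.0 - g) * sigma_s, 0.0

# A dense, strongly forward-scattering medium (g = 0.9) maps to a much
# optically thinner isotropic one, allowing longer free-flight steps in
# Monte Carlo path sampling.
sigma_a, sigma_s_reduced, g_new = similarity_first_order(0.1, 10.0, 0.9)
```

The paper's high-order relations go beyond this sketch by also solving for an altered, fully tabulated phase function rather than forcing the altered medium to scatter isotropically.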

SIGGRAPH Paper


Acknowledgments

We are grateful to Ioannis Gkioulekas, Steve Marschner, and Bruce Walter for their insightful suggestions. We also thank the anonymous reviewers for their helpful comments. Funding for this work was provided by NSF IIS grants 1011832, 1011919, 1161645, and the Intel Science and Technology Center for Visual Computing.

Inverse Volume Rendering with Material Dictionaries

Ioannis Gkioulekas1, Shuang Zhao2, Kavita Bala2, Todd Zickler1, and Anat Levin3
1Harvard School of Engineering and Applied Sciences, 2Cornell University, 3Weizmann Institute of Science
 
ACM Transactions on Graphics (SIGGRAPH Asia 2013), 32(6), November 2013

Abstract

Translucent materials are ubiquitous, and simulating their appearance requires accurate physical parameters. However, physically-accurate parameters for scattering materials are difficult to acquire. We introduce an optimization framework for measuring bulk scattering properties of homogeneous materials (phase function, scattering coefficient, and absorption coefficient) that is more accurate than previous techniques and applicable to a broader range of materials. The optimization combines stochastic gradient descent with Monte Carlo rendering and a material dictionary to invert the radiative transfer equation. It offers several advantages: (1) it does not require isolating single-scattering events; (2) it allows measuring solids and liquids that are hard to dilute; (3) it returns parameters in physically-meaningful units; and (4) it does not restrict the shape of the phase function to the Henyey-Greenstein or any other low-parameter model. We evaluate our approach by creating an acquisition setup that collects images of a material slab under narrow-beam RGB illumination. We validate results by measuring prescribed nano-dispersions and showing that the recovered parameters match those predicted by Lorenz-Mie theory. We also provide a table of RGB scattering parameters for some common liquids and solids, validated by simulating color images in novel geometric configurations that match the corresponding photographs with less than 5% error.
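The overall structure of the optimization can be illustrated with a toy sketch in which a fixed linear map stands in for the Monte Carlo renderer; the dimensions, learning rate, and all names below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the Monte Carlo renderer: a fixed linear map
# from material parameters (e.g. dictionary weights) to pixel intensities,
# chosen only so this sketch stays runnable.
A = rng.standard_normal((64, 8))

def render(theta):
    return A @ theta

# "Measurements": images of a slab rendered with known ground-truth parameters.
theta_true = rng.random(8)
measured = render(theta_true)

def stochastic_gradient(theta, batch):
    """Gradient of the squared image loss restricted to a random pixel batch."""
    residual = render(theta)[batch] - measured[batch]
    return A[batch].T @ residual / len(batch)

# Stochastic gradient descent, projecting onto the non-negative orthant
# since physical scattering parameters cannot be negative.
theta = np.full(8, 0.5)
for _ in range(2000):
    batch = rng.choice(64, size=16, replace=False)
    theta = np.maximum(theta - 0.05 * stochastic_gradient(theta, batch), 0.0)
```

In the actual framework, each gradient evaluation requires a stochastic Monte Carlo rendering of the slab, which is what makes the combination of SGD and a compact material dictionary attractive.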

SIGGRAPH Asia Paper


Acknowledgments

We thank Henry Sarkas at Nanophase for donating material samples and calibration data. Funding was provided by the National Science Foundation (IIS 1161564, 1012454, 1212928, and 1011919), the European Research Council, the Binational Science Foundation, Intel ICRI-CI, and Amazon Web Services in Education grant awards. Much of this work was performed while T. Zickler was a Feinberg Foundation Visiting Faculty Program Fellow at the Weizmann Institute.

Understanding the Role of Phase Function in Translucent Appearance

Ioannis Gkioulekas1, Bei Xiao2, Shuang Zhao3, Edward Adelson2, Todd Zickler1, and Kavita Bala3
1Harvard School of Engineering and Applied Sciences, 2Massachusetts Institute of Technology, 3Cornell University
 
ACM Transactions on Graphics, 32(5), September 2013

Abstract

Multiple scattering contributes critically to the characteristic translucent appearance of food, liquids, skin, and crystals; but little is known about how it is perceived by human observers. This paper explores the perception of translucency by studying the image effects of variations in one factor of multiple scattering: the phase function. We consider an expanded space of phase functions created by linear combinations of Henyey-Greenstein and von Mises-Fisher lobes, and we study this physical parameter space using computational data analysis and psychophysics.

Our study identifies a two-dimensional embedding of the physical scattering parameters in a perceptually-meaningful appearance space. Through our analysis of this space, we find uniform parameterizations of its two axes by analytical expressions of moments of the phase function, and provide an intuitive characterization of the visual effects that can be achieved at different parts of it. We show that our expansion of the space of phase functions enlarges the range of achievable translucent appearance compared to traditional single-parameter phase function models. Our findings highlight the important role phase function can have in controlling translucent appearance, and provide tools for manipulating its effect in material design applications.
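The expanded lobe space described above is straightforward to write down. Below is a minimal sketch of the two lobe types and their linear combination, assuming the standard sphere-normalized forms of the Henyey-Greenstein and von Mises-Fisher distributions; function and parameter names are ours:

```python
import math

def hg(cos_theta, g):
    """Henyey-Greenstein lobe with mean cosine g, normalized over the sphere."""
    return (1 - g * g) / (4 * math.pi * (1 + g * g - 2 * g * cos_theta) ** 1.5)

def vmf(cos_theta, kappa):
    """von Mises-Fisher lobe with concentration kappa, normalized over the sphere."""
    return kappa * math.exp(kappa * cos_theta) / (4 * math.pi * math.sinh(kappa))

def mixed_phase(cos_theta, w, g, kappa):
    """Convex combination of an HG lobe and a vMF lobe (0 <= w <= 1)."""
    return w * hg(cos_theta, g) + (1 - w) * vmf(cos_theta, kappa)

def sphere_integral(p, n=20000):
    """Trapezoidal quadrature of 2*pi * integral of p(u) du for u in [-1, 1]."""
    h = 2.0 / n
    total = 0.5 * (p(-1.0) + p(1.0))
    for i in range(1, n):
        total += p(-1.0 + i * h)
    return 2 * math.pi * total * h

# Each lobe is normalized, so any convex combination is a valid phase function.
norm = sphere_integral(lambda u: mixed_phase(u, 0.4, 0.6, 5.0))
```

Because both lobes individually integrate to one over the sphere, any weights summing to one yield a physically valid phase function, which is what makes this linear family convenient for spanning a wide range of translucent appearance.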

TOG Paper


Acknowledgments

I.G. and T.Z. were funded by the National Science Foundation through awards IIS-1161564, IIS-1012454, and IIS-1212928, and by Amazon Web Services in Education grant awards. B.X. and E.A. were funded by the National Institutes of Health through awards R01-EY019262-02 and R21-EY019741-02. S.Z. and K.B. were funded by the National Science Foundation through awards IIS-1161645 and IIS-1011919. We thank Bonhams for allowing us to use the photograph of Figure 2, and The Stanford 3D Scanning Repository for providing the Lucy, Dragon, and Buddha models used in Figures 1, 4-8, 12, and 13.

Page maintained by Shuang Zhao and Kavita Bala
Last update: April 2014