Reconstructing Translucent Objects Using Differentiable Rendering


Reconstruction results rendered with global illumination for both synthetic data (left three objects) and real data (right three objects). On synthetic data, we show joint reconstruction of the shape and subsurface-scattering material of a bumpy object (first from left), a spatially varying extinction-coefficient texture (left bunny), and a spatially varying single-scattering reflectance texture (right bunny). On real data, we show reconstructions of a slice of soap and cut cubes of kiwi and dragonfruit.

Abstract


Inverse rendering is a powerful approach to modeling objects from photographs, and we extend previous techniques to handle translucent materials that exhibit subsurface scattering. Representing translucency using a heterogeneous bidirectional scattering-surface reflectance distribution function (BSSRDF), we extend the framework of path-space differentiable rendering to accommodate both surface and subsurface reflection. This introduces new types of paths requiring new methods for sampling moving discontinuities in material space that arise from visibility and moving geometry. We use this differentiable rendering method in an end-to-end approach that jointly recovers heterogeneous translucent materials (represented by a BSSRDF) and detailed geometry of an object (represented by a mesh) from a sparse set of measured 2D images in a coarse-to-fine framework incorporating Laplacian preconditioning for the geometry. To efficiently optimize our models in the presence of the Monte Carlo noise introduced by the BSSRDF integral, we introduce a dual-buffer method for evaluating the L2 image loss. This efficiently avoids potential bias in gradient estimation due to the correlation of estimates for image pixels and their derivatives and enables correct convergence of the optimizer even when using low sample counts in the renderer. We validate our derivatives by comparing against finite differences and demonstrate the effectiveness of our technique by comparing inverse-rendering performance with previous methods. We show superior reconstruction quality on a set of synthetic and real-world translucent objects as compared to previous methods that model only surface reflection.
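
For background, the BSSRDF S generalizes the BRDF by relating light arriving at one surface point to radiance leaving another, so outgoing radiance involves an integral over both incident directions and the surface area. In the standard textbook formulation (not a formula specific to this paper):

    L_o(x_o, \omega_o) = \int_A \int_{\mathcal{H}^2} S(x_i, \omega_i; x_o, \omega_o)\, L_i(x_i, \omega_i)\, |n(x_i) \cdot \omega_i|\, \mathrm{d}\omega_i\, \mathrm{d}A(x_i)

The extra area integral over A is the BSSRDF integral the abstract refers to; estimating it with Monte Carlo samples is what introduces the noise that the dual-buffer loss is designed to cope with.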
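To make the dual-buffer loss concrete, here is a minimal PyTorch sketch, not the paper's implementation. The renderer render(params, spp, seed) and its signature are hypothetical stand-ins for illustration; the essential point is that the two buffers use independent random seeds, which decorrelates their pixel estimates and their derivative estimates.

    import torch

    def dual_buffer_l2(render, params, target, step, spp=4):
        # Render two statistically independent Monte Carlo estimates of
        # the same image: independent seeds are the crucial ingredient.
        img_a = render(params, spp=spp, seed=2 * step)
        img_b = render(params, spp=spp, seed=2 * step + 1)
        # The naive loss mean((img_a - target)**2) has expectation
        # (E[I] - target)^2 + Var[I]; the variance term biases the gradient.
        # With independent buffers, E[(A - T) * (B - T)] = (E[I] - T)^2,
        # so the variance term drops out even at low sample counts.
        return ((img_a - target) * (img_b - target)).mean()

    if __name__ == "__main__":
        # Toy stand-in renderer (hypothetical): a parameter image plus
        # noise whose variance falls off with the sample count.
        def render(params, spp, seed):
            g = torch.Generator().manual_seed(seed)
            noise = torch.randn(params.shape, generator=g) / spp ** 0.5
            return params + noise

        params = torch.full((8, 8), 0.5, requires_grad=True)
        target = torch.ones(8, 8)
        loss = dual_buffer_l2(render, params, target, step=0)
        loss.backward()  # unbiased gradient of the squared image error

Calling loss.backward() pairs the derivative estimate of each buffer with the residual of the other; that independence is what keeps the gradient estimate unbiased.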

Cite


@inproceedings{10.1145/3528233.3530714,
  author    = {Deng, Xi and Luan, Fujun and Walter, Bruce and Bala, Kavita and Marschner, Steve},
  title     = {Reconstructing Translucent Objects Using Differentiable Rendering},
  year      = {2022},
  isbn      = {9781450393379},
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA},
  url       = {https://doi.org/10.1145/3528233.3530714},
  doi       = {10.1145/3528233.3530714},
  booktitle = {ACM SIGGRAPH 2022 Conference Proceedings},
  articleno = {38},
  numpages  = {10},
  keywords  = {ray tracing, subsurface scattering, appearance acquisition, differentiable rendering},
  location  = {Vancouver, BC, Canada},
  series    = {SIGGRAPH '22}
}

Copyright © 2022. All rights reserved.