Modeling and Rendering Fabrics at Micron-Resolution

Modular Flux Transfer: Efficient Rendering of High-Resolution Volumes with Repeated Structures

Shuang Zhao1, Miloš Hašan2, Ravi Ramamoorthi3, and Kavita Bala1
1Cornell University, 2Autodesk, 3University of California, Berkeley

ACM Transactions on Graphics (SIGGRAPH 2013), 32(4), July 2013

Download: [ Paper PDF (15 MB) ]


Abstract: The highest fidelity images to date of complex materials like cloth use extremely high-resolution volumetric models. However, rendering such complex volumetric media is expensive, with brute-force path tracing often the only viable solution. Fortunately, common volumetric materials (fabrics, finished wood, synthesized solid textures) are structured, with repeated patterns approximated by tiling a small number of exemplar blocks. In this paper, we introduce a precomputation-based rendering approach for such volumetric media with repeated structures based on a modular transfer formulation. We model each exemplar block as a voxel grid and precompute voxel-to-voxel, patch-to-patch, and patch-to-voxel flux transfer matrices. At render time, when blocks are tiled to produce a high-resolution volume, we accurately compute low-order scattering, with modular flux transfer used to approximate higher-order scattering. We achieve speedups of up to 12X over path tracing on extremely complex volumes, with minimal loss of quality. In addition, we demonstrate that our approach outperforms photon mapping on these materials.
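
The higher-order transfer step described above can be illustrated with a small toy sketch in Python/NumPy. Everything here (the patch count, the random transfer matrix, and the fixed-point solve) is illustrative only, not the paper's actual precomputation or data layout.

```python
import numpy as np

# Toy stand-in for a precomputed patch-to-patch transfer matrix T that maps
# flux entering a block's boundary patches to flux leaving them after one
# pass of higher-order scattering. Dimensions are illustrative.
n_patches = 6
rng = np.random.default_rng(0)

# An energy-dissipating transfer matrix: each row sums to 0.9, so every
# bounce absorbs some flux and the scattering series converges.
T = rng.random((n_patches, n_patches))
T *= 0.9 / T.sum(axis=1, keepdims=True)

# Flux injected into the block by the directly computed low-order scattering.
incoming = np.ones(n_patches)

# Total multiply-scattered outgoing flux: the fixed point of repeated
# transfer, i.e. (I - T)^-1 applied to the first-bounce output.
outgoing = np.linalg.solve(np.eye(n_patches) - T, T @ incoming)
```

Because each bounce attenuates flux, the Neumann series sum over repeated applications of T converges, and the single linear solve above yields the same total as iterating the transfer to convergence.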


Structure-aware Synthesis for Predictive Woven Fabric Appearance

Shuang Zhao1, Wenzel Jakob1, Steve Marschner1, and Kavita Bala1
1Cornell University

ACM Transactions on Graphics (SIGGRAPH 2012), 31(4), July 2012

Download: [ Paper PDF (18 MB) ]


Abstract: Woven fabrics have a wide range of appearance determined by their small-scale 3D structure. Accurately modeling this structural detail can produce highly realistic renderings of fabrics and is critical for predictive rendering of fabric appearance. But building these yarn-level volumetric models is challenging. Procedural techniques are labor-intensive and fail to capture the naturally arising irregularities which contribute significantly to the overall appearance of cloth. Techniques that acquire the detailed 3D structure of real fabric samples are limited to modeling the scanned samples and cannot represent different fabric designs.

This paper presents a new approach to creating volumetric models of woven cloth, which starts with user-specified fabric designs and produces models that correctly capture the yarn-level structural details of cloth. We create a small database of volumetric exemplars by scanning fabric samples with simple weave structures. To build an output model, our method synthesizes a new volume by copying data from the exemplars at each yarn crossing to match a weave pattern that specifies the desired output structure. Our results demonstrate that our approach generalizes well to complex designs and can produce highly realistic results at both large and small scales.
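
The synthesis step, copying exemplar data at each yarn crossing according to a weave pattern, can be sketched as a simple tiling loop. This is a hedged toy in Python/NumPy; the exemplar blocks, block size, and binary pattern encoding are stand-ins, not the paper's actual data or matching procedure.

```python
import numpy as np

bx, by, bz = 4, 4, 4   # toy voxel block size per yarn crossing
exemplars = {
    0: np.zeros((bx, by, bz)),  # stand-in for a "weft over warp" block
    1: np.ones((bx, by, bz)),   # stand-in for a "warp over weft" block
}

# A plain-weave pattern: crossings alternate in a checkerboard.
pattern = np.indices((4, 4)).sum(axis=0) % 2

# Assemble the output volume by copying the matching exemplar block
# at each crossing of the weave pattern.
volume = np.empty((4 * bx, 4 * by, bz))
for i in range(4):
    for j in range(4):
        block = exemplars[pattern[i, j]]
        volume[i*bx:(i+1)*bx, j*by:(j+1)*by, :] = block
```

The real method additionally matches boundary geometry so that copied yarn segments join smoothly across crossings; the loop above only illustrates the pattern-driven tiling itself.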


Building Volumetric Appearance Models of Fabric using Micro CT Imaging

Shuang Zhao1, Wenzel Jakob1, Steve Marschner1, and Kavita Bala1
1Cornell University

ACM Transactions on Graphics (SIGGRAPH 2011), 30(4), July 2011

Download: [ Paper PDF (18 MB) ]


Abstract: The appearance of complex, thick materials like textiles is determined by their 3D structure, and they are incompletely described by surface reflection models alone. While volume scattering can produce highly realistic images of such materials, creating the required volume density models is difficult. Procedural approaches require significant programmer effort and intuition to design special-purpose algorithms for each material. Further, the resulting models lack the visual complexity of real materials with their naturally arising irregularities.

This paper proposes a new approach to acquiring volume models, based on density data from X-ray computed tomography (CT) scans and appearance data from photographs under uncontrolled illumination. To model a material, a CT scan is made, resulting in a scalar density volume. This 3D data is processed to extract orientation information and remove noise. The resulting density and orientation fields are used in an appearance matching procedure to define scattering properties in the volume that, when rendered, produce images with texture statistics that match the photographs. As our results show, this approach can easily produce volume appearance models with extreme detail, and at larger scales the distinctive textures and highlights of a range of very different fabrics like satin and velvet emerge automatically -- all based simply on having accurate mesoscale geometry.
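
One common way to extract per-voxel orientation from a scalar density volume, as the processing step above requires, is the structure tensor: density varies least along a fiber, so the fiber direction is the tensor's smallest-eigenvalue axis. This NumPy sketch illustrates the idea only and is not necessarily the paper's exact processing pipeline.

```python
import numpy as np

def fiber_orientation(density):
    """Per-voxel fiber direction from a scalar density volume (toy sketch)."""
    gx, gy, gz = np.gradient(density.astype(float))
    grads = np.stack([gx, gy, gz], axis=-1)          # (X, Y, Z, 3)
    # Structure tensor per voxel: outer product of the density gradient.
    st = grads[..., :, None] * grads[..., None, :]   # (X, Y, Z, 3, 3)
    # (A real pipeline would Gaussian-smooth st over a neighborhood here
    #  to denoise before the eigendecomposition.)
    w, v = np.linalg.eigh(st)                        # eigenvalues ascending
    return v[..., 0]                                 # smallest-eigenvalue axis

# Toy volume: density varies only along x, so recovered fiber directions
# should lie in the y/z plane wherever the gradient is nonzero.
x = np.linspace(0, np.pi, 8)
density = np.sin(x)[:, None, None] * np.ones((8, 8, 8))
dirs = fiber_orientation(density)
```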


Page maintained by Shuang Zhao
Last update: April 2013