DualSDF represents shapes at two levels of granularity, allowing users to manipulate high-resolution shapes (odd rows) with high-level concepts by manipulating a proxy primitive-based shape (even rows). Simple editing operations on individual primitives (colored in blue) are propagated to the other primitives and to the fine-grained model in a semantically meaningful manner. Above, we illustrate how an existing shape (inside the red box) can be modified semantically by adjusting the radius of a single primitive (the fuselage diameter of the airplane) or the distance between two primitives (the wheelbase of the car).
Abstract
We are seeing a Cambrian explosion of 3D shape representations for use in machine learning.
Some representations seek high expressive power in capturing high-resolution detail. Other approaches seek to represent shapes as compositions of simple parts, which are intuitive for people to understand and easy to edit and manipulate. However, it is difficult to achieve both fidelity and interpretability in the same representation. We propose DualSDF, a representation expressing shapes at two levels of granularity, one capturing fine details and the other representing an abstracted proxy shape using simple and semantically consistent shape primitives.
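To make the two-level idea concrete, the sketch below shows one way to structure such a decoder in PyTorch: a single latent code is consumed by a fine branch (an MLP predicting a signed distance per query point) and a coarse branch (predicting a set of spheres whose union forms the proxy shape). The class name, layer widths, and primitive count are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn

class DualSDFDecoder(nn.Module):
    """Two-level SDF decoder sharing one latent code (illustrative sketch)."""

    def __init__(self, latent_dim=128, hidden=512, num_spheres=256):
        super().__init__()
        # Fine level: per-point signed distance conditioned on the latent.
        self.fine = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
        # Coarse level: latent -> num_spheres primitives, each (cx, cy, cz, r).
        self.coarse = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_spheres * 4),
        )
        self.num_spheres = num_spheres

    def fine_sdf(self, z, x):
        # z: (B, latent_dim) latent codes; x: (B, P, 3) query points.
        z_exp = z.unsqueeze(1).expand(-1, x.shape[1], -1)
        return self.fine(torch.cat([z_exp, x], dim=-1)).squeeze(-1)

    def coarse_sdf(self, z, x):
        # Decode sphere parameters, then take the union (pointwise min)
        # of the analytic sphere SDFs: ||x - c|| - r.
        params = self.coarse(z).view(-1, self.num_spheres, 4)
        centers = params[..., :3]
        radii = params[..., 3].abs()  # keep radii positive
        d = torch.cdist(x, centers) - radii.unsqueeze(1)  # (B, P, N)
        return d.min(dim=-1).values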
To achieve a tight coupling between the two representations, we use a variational objective over a shared latent space.
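In practice this means both decoders are trained against the same per-shape latent distribution. Below is a hedged sketch of one training step under an auto-decoder-style setup, where the posterior parameters mu and log_var are free per-shape embeddings, both granularities reconstruct ground-truth SDF samples from the same reparameterized latent draw, and a KL term regularizes the posterior toward a standard normal prior. The L1 reconstruction loss and the kl_weight value are our assumptions; the paper's exact losses and weights may differ.

def training_step(model, mu, log_var, points, gt_sdf, kl_weight=1e-3):
    # mu, log_var: (B, latent_dim) per-shape posterior parameters.
    # points:      (B, P, 3) sampled query points.
    # gt_sdf:      (B, P) ground-truth signed distances at those points.

    # Reparameterized sample from the per-shape posterior.
    z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()

    # Both granularities are reconstructed from the *same* sample,
    # which is what couples the two representations.
    fine_loss = (model.fine_sdf(z, points) - gt_sdf).abs().mean()
    coarse_loss = (model.coarse_sdf(z, points) - gt_sdf).abs().mean()

    # KL divergence to a standard normal prior keeps the shared latent
    # space smooth, so edits at one level transfer to the other.
    kl = 0.5 * (mu.pow(2) + log_var.exp() - log_var - 1).sum(-1).mean()
    return fine_loss + coarse_loss + kl_weight * kl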
Our two-level model gives rise to a new shape manipulation technique in which a user can interactively manipulate the coarse proxy shape and see the changes instantly mirrored in the high-resolution shape. Moreover, our model actively augments and guides the manipulation towards producing semantically meaningful shapes, making complex manipulations possible with minimal user input.
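One way to read this manipulation loop is as constrained optimization in latent space: a user edit (say, a target radius for one sphere) becomes a loss on the coarse decoder's output, the shared latent code is updated by gradient descent, and the high-resolution shape is re-decoded from the new code. The sketch below reuses the hypothetical DualSDFDecoder above; the prior-penalty weight and step count are illustrative choices, not the paper's settings.

def edit_primitive_radius(model, z, sphere_idx, target_radius,
                          steps=100, lr=1e-2, prior_weight=1e-3):
    # z: (1, latent_dim) latent code of the shape being edited.
    z = z.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        params = model.coarse(z).view(-1, model.num_spheres, 4)
        radius = params[:, sphere_idx, 3].abs()
        # The edit loss pulls the chosen primitive toward the user's
        # target; the Gaussian-prior penalty keeps the code in a
        # high-probability region, nudging the rest of the shape toward
        # a semantically plausible update rather than a local distortion.
        loss = (radius - target_radius).pow(2).mean() + prior_weight * z.pow(2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()  # decode with model.fine_sdf to get the edited shape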
Citation

@inproceedings{hao2020dualsdf,
  title={DualSDF: Semantic Shape Manipulation using a Two-Level Representation},
  author={Hao, Zekun and Averbuch-Elor, Hadar and Snavely, Noah and Belongie, Serge},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2020}
}
Acknowledgements
We would like to thank Abe Davis for his insightful feedback. This work was supported in part by grants from Facebook and by the generosity of Eric and Wendy Schmidt by recommendation of the Schmidt Futures program.