Faster Acceleration Noise for Multibody Animations using Precomputed Soundbanks

ACM/Eurographics Symposium on Computer Animation 2012

Jeffrey N. Chadwick, Changxi Zheng, and Doug L. James

We introduce an efficient method for synthesizing rigid-body acceleration noise for complex multibody scenes. Existing acceleration noise synthesis methods for animation require object-specific precomputation, which is prohibitively expensive for scenes involving rigid-body fracture or other sources of small, procedurally generated debris. We avoid this precomputation by introducing a proxy-based method for acceleration noise synthesis in which precomputed acceleration noise data is generated only for a small set of ellipsoidal proxies and stored in a proxy soundbank. Our proxy model is shown to be effective at approximating acceleration noise in scenes with many small debris pieces (e.g., pieces produced by rigid-body fracture). The approach is not suitable for synthesizing acceleration noise from larger objects with complicated non-convex geometry; however, previous work has shown that acceleration noise from such objects tends to be largely masked by modal vibration sound. We manage the cost of our proxy soundbank with a new wavelet-based compression scheme for acceleration noise and use our model to significantly improve sound synthesis results for several multibody animations.
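To make the proxy-soundbank idea concrete, below is a minimal Python sketch of one plausible runtime lookup: each debris piece is snapped to the nearest precomputed ellipsoidal proxy, and that proxy's stored acceleration-noise pulse is scaled by the contact impulse and mixed into the output audio. The class and function names, the nearest-neighbor matching in semi-axis space, the synthetic stand-in pulses, and the linear impulse scaling are all illustrative assumptions, not the paper's actual implementation (which precomputes pulses offline and compresses them with wavelets).

```python
import numpy as np

SAMPLE_RATE = 44100  # audio sample rate in Hz (assumed)


class ProxySoundbank:
    """Toy soundbank mapping ellipsoid semi-axes to a precomputed
    acceleration-noise pulse. Illustrative only: a real soundbank
    would store pulses computed offline with an acoustic solver."""

    def __init__(self, semi_axes_grid, pulse_length=256):
        self.pulses = {}
        rng = np.random.default_rng(0)
        for axes in semi_axes_grid:
            # Stand-in for precomputed acceleration noise: a short,
            # exponentially decaying broadband pulse.
            t = np.arange(pulse_length) / SAMPLE_RATE
            self.pulses[axes] = rng.standard_normal(pulse_length) * np.exp(-2000.0 * t)

    def nearest_proxy(self, semi_axes):
        # Snap an arbitrary debris piece to the closest precomputed
        # ellipsoidal proxy (nearest neighbor in semi-axis space).
        keys = list(self.pulses)
        dists = [np.linalg.norm(np.subtract(k, semi_axes)) for k in keys]
        return keys[int(np.argmin(dists))]

    def pulse_for(self, semi_axes, impulse_magnitude):
        # Scale the stored pulse by the contact impulse magnitude; a
        # simple first-order amplitude model assumed for this sketch.
        return impulse_magnitude * self.pulses[self.nearest_proxy(semi_axes)]


def synthesize(events, bank, duration=1.0):
    """Mix one scaled proxy pulse into the output per contact event.
    `events` is a list of (time_sec, semi_axes, impulse_magnitude)."""
    out = np.zeros(int(duration * SAMPLE_RATE))
    for t, axes, j in events:
        pulse = bank.pulse_for(axes, j)
        start = int(t * SAMPLE_RATE)
        end = min(start + len(pulse), len(out))
        out[start:end] += pulse[: end - start]
    return out


if __name__ == "__main__":
    # Hypothetical proxy grid: semi-axes (meters) of small debris pieces.
    grid = [(a, b, c) for a in (0.01, 0.02) for b in (0.01, 0.02) for c in (0.01, 0.02)]
    bank = ProxySoundbank(grid)
    events = [(0.10, (0.012, 0.018, 0.011), 0.5),
              (0.25, (0.020, 0.020, 0.010), 1.2)]
    audio = synthesize(events, bank)
    print(audio.shape, float(np.max(np.abs(audio))))
```

The design point this sketch illustrates is the central trade-off of the method: precomputation cost scales with the size of the proxy grid rather than with the number of debris pieces in the scene, at the price of approximating each piece's geometry by its closest ellipsoid.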

Paper | Citation | Video | Source Code | Presentation Slides | Acknowledgements

Citation:

Jeffrey N. Chadwick, Changxi Zheng, and Doug L. James, Faster Acceleration Noise for Multibody Animations using Precomputed Soundbanks, ACM/Eurographics Symposium on Computer Animation, July 2012 (TeX)

Source Code:

Coming soon

SCA 2012 Presentation:

Keynote slides (zipped, 98MB)

Acknowledgements:

The National Science Foundation (HCC-0905506)
The Natural Sciences and Engineering Research Council of Canada
The Alfred P. Sloan Foundation
The John Simon Guggenheim Memorial Foundation
Intel (ISTC-VC)
Pixar
Autodesk
Vision Research

This research was conducted in conjunction with the Intel Science and Technology Center for Visual Computing.