Research Publications and Projects

Doug L. James
Associate Professor
Department of Computer Science
Cornell University

Research Interests: I am broadly interested in computer graphics, physically based animation, physically based sound rendering, haptic rendering, and scientific computing.  Our research contributions share several recurring themes:

Deformation processing: Many of our contributions address efficient deformation processing methods: fast precomputed Green’s function models [JP99, PvdDJ+01, JP03], hardware-accelerated skinning [JP02, JT05], data-driven animation using motion graphs [JF03, JTCW07], dimensional model reduction and nonlinear reduced-order dynamics [BJ05, AKJ08, KJ09], deformable collision processing [JP04, KSJP08, BJ10], vibration-based sound synthesis [JBP06, BDT+08, CAJ09], fast lattice shape matching [RJ07], and extra-warm yarn-level cloth [KJM08, KJM10].

Enabling interactive and multi-sensory applications has been a driving theme for our deformation processing work. Special algorithmic challenges are posed by interactive and low-latency physical simulation. We have tried to enable compelling multi-sensory feedback (graphics, force-feedback haptics, and acoustics) by designing algorithms that are fundamentally different from traditional scientific computing methods.

Physically based sound rendering addresses how to synthesize synchronized and realistic sounds automatically for a wide spectrum of physically based simulation models (rigid bodies [JP06], nonlinear thin-shells [CAJ09], splashing fluids [ZJ09], fracturing solids [ZJ10], self-collision chattering [BJ10], etc.).  A major focus of our current work is developing efficient reduced-order algorithms to synthesize vibrations and sound radiation to enable realistic sound source models for future virtual environments.

Motion control and design: Simulating physics fast is one thing, but getting it to do what you want is another. Controlling simulated objects to design physically plausible motion, especially easily and interactively, is a hard, long-standing problem in computer animation. Recently, Chris Twigg and I have explored two new approaches to this problem: Many-Worlds Browsing [TJ07] and reverse-time rigid-body dynamics [TJ08].

Timothy R. Langlois and Doug L. James, Inverse-Foley Animation: Synchronizing rigid-body motions to sound, ACM Transactions on Graphics (SIGGRAPH 2014), 33(4), August 2014.

ABSTRACT: In this paper, we introduce Inverse-Foley Animation, a technique for optimizing rigid-body animations so that contact events are synchronized with input sound events. A precomputed database of randomly sampled rigid-body contact events is used to build a contact-event graph, which can be searched to determine a plausible sequence of contact events synchronized with the input sound's events. To more easily find motions with matching contact times, we allow transitions between simulated contact events using a motion blending formulation based on modified contact impulses. We fine-tune synchronization by slightly retiming ballistic motions. Given a sound, our system can synthesize synchronized motions using graphs built with hundreds of thousands of precomputed motions, and millions of contact events. Our system is easy to use, and has been used to plan motions for hundreds of sounds, and dozens of rigid-body models.

Timothy R. Langlois, Steven S. An, Kelvin K. Jin, and Doug L. James, Eigenmode Compression for Modal Sound Models, ACM Transactions on Graphics (SIGGRAPH 2014), 33(4), August 2014.

ABSTRACT: We propose and evaluate a method for significantly compressing modal sound models, thereby making them far more practical for audiovisual applications. The dense eigenmode matrix, needed to compute the sound model's response to contact forces, can consume tens to thousands of megabytes depending on mesh resolution and mode count. Our eigenmode compression pipeline is based on nonlinear optimization of Moving Least Squares (MLS) approximations. Enhanced compression is achieved by exploiting symmetry both within and between eigenmodes, and by adaptively assigning per-mode error levels based on human perception of the far-field pressure amplitudes. Our method provides smooth eigenmode approximations, and efficient random access. We demonstrate that, in many cases, hundredfold compression ratios can be achieved without audible degradation of the rendered sound.

Jeffrey N. Chadwick, Changxi Zheng and Doug L. James, Faster Acceleration Noise for Multibody Animations using Precomputed Soundbanks, ACM/Eurographics Symposium on Computer Animation, July 2012.

ABSTRACT: We introduce an efficient method for synthesizing rigid-body acceleration noise for complex multibody scenes. Existing acceleration noise synthesis methods for animation require object-specific precomputation, which is prohibitively expensive for scenes involving rigid-body fracture or other sources of small, procedurally generated debris. We avoid precomputation by introducing a proxy-based method for acceleration noise synthesis in which precomputed acceleration noise data is only generated for a small set of ellipsoidal proxies and stored in a proxy soundbank. Our proxy model is shown to be effective at approximating acceleration noise from scenes with lots of small debris (e.g., pieces produced by rigid-body fracture). This approach is not suitable for synthesizing acceleration noise from larger objects with complicated non-convex geometry; however, it has been shown in previous work that acceleration noise from objects such as these tends to be largely masked by modal vibration sound. We manage the cost of our proxy soundbank with a new wavelet-based compression scheme for acceleration noise and use our model to significantly improve sound synthesis results for several multibody animations. 

Changxi Zheng and Doug L. James, Energy-based Self-Collision Culling for Arbitrary Mesh Deformations, ACM Transactions on Graphics (SIGGRAPH 2012), August 2012.

ABSTRACT:  In this paper, we accelerate self-collision detection (SCD) for a deforming triangle mesh by exploiting the idea that a mesh cannot self collide unless it deforms enough. Unlike prior work on subspace self-collision culling which is restricted to low-rank deformation subspaces, our energy-based approach supports arbitrary mesh deformations while still being fast. Given a bounding volume hierarchy (BVH) for a triangle mesh, we precompute Energy-based Self-Collision Culling (ESCC) certificates on bounding-volume-related sub-meshes which indicate the amount of deformation energy required for it to self collide. After updating energy values at runtime, many bounding-volume self-collision queries can be culled using the ESCC certificates. We propose an affine-frame Laplacian-based energy definition which sports a highly optimized certificate preprocess, and fast runtime energy evaluation. The latter is performed hierarchically to amortize Laplacian energy and affine-frame estimation computations. ESCC supports both discrete and continuous SCD with detailed and nonsmooth geometry. We observe significant culling on many examples, with SCD speed-ups up to 26x.

Jeffrey N. Chadwick, Changxi Zheng and Doug L. James, Precomputed Acceleration Noise for Improved Rigid-Body Sound, ACM Transactions on Graphics (SIGGRAPH 2012), August 2012.

ABSTRACT: We introduce an efficient method for synthesizing acceleration noise: sound produced when an object experiences abrupt rigid-body acceleration due to collisions or other contact events. We approach this in two main steps. First, we estimate continuous contact force profiles from rigid-body impulses using a simple model based on Hertz contact theory. Next, we compute solutions to the acoustic wave equation due to short acceleration pulses in each rigid-body degree of freedom. We introduce an efficient representation for these solutions, Precomputed Acceleration Noise, which allows us to accurately estimate sound due to arbitrary rigid-body accelerations.  We find that the addition of acceleration noise significantly complements the standard modal sound algorithm, especially for small objects.
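
As a rough illustration of the first step, a rigid-body impulse can be spread over a finite contact duration as a smooth force pulse whose time integral reproduces the impulse. The sketch below uses a simple half-sine pulse (a stand-in for the Hertz-theory profile; the impulse, duration, and sampling rate are illustrative, not values from the paper):

```python
import numpy as np

def halfsine_force_profile(impulse, tau, dt):
    """Spread an instantaneous impulse over duration tau as a half-sine
    force pulse f(t) = f_max * sin(pi * t / tau), with f_max chosen so
    the time integral of f equals the original impulse."""
    t = np.arange(0.0, tau, dt)
    # integral of sin(pi t / tau) over [0, tau] is 2 * tau / pi,
    # so f_max = impulse * pi / (2 * tau) conserves the impulse.
    f_max = impulse * np.pi / (2.0 * tau)
    return f_max * np.sin(np.pi * t / tau)

# Example: a 0.02 N*s impulse spread over a 1 ms contact at 44.1 kHz.
dt = 1.0 / 44100.0
profile = halfsine_force_profile(0.02, 1e-3, dt)
recovered = profile.sum() * dt  # numerically integrates back to ~0.02
```

The continuous profile has bounded frequency content set by the contact duration, which is what makes it usable as an audio-rate forcing signal.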

Steven S. An, Doug L. James and Steve Marschner, Motion-driven Concatenative Synthesis of Cloth Sounds, ACM Transactions on Graphics (SIGGRAPH 2012), August 2012.

ABSTRACT: We present a practical data-driven method for automatically synthesizing plausible soundtracks for physics-based cloth animations running at graphics rates. Given a cloth animation, we analyze the deformations and use motion events to drive crumpling and friction sound models estimated from cloth measurements. We synthesize a low-quality sound signal, which is then used as a target signal for a concatenative sound synthesis (CSS) process. CSS selects a sequence of microsound units, very short segments, from a database of recorded cloth sounds, which best match the synthesized target sound in a low-dimensional feature space after applying a hand-tuned warping function. The selected microsound units are concatenated together to produce the final cloth sound with minimal filtering. Our approach avoids expensive physics-based synthesis of cloth sound, instead relying on cloth recordings and our motion-driven CSS approach for realism. We demonstrate its effectiveness on a variety of cloth animations involving various materials and character motions, including first-person virtual clothing with binaural sound.

Cem Yuksel, Jonathan M. Kaldor, Doug L. James, and Steve Marschner, Stitch Meshes for Modeling Knitted Clothing with Yarn-level Detail, ACM Transactions on Graphics (SIGGRAPH 2012), August 2012.

ABSTRACT:  Recent yarn-based simulation techniques permit realistic and efficient dynamic simulation of knitted clothing, but producing the required yarn-level models remains a challenge. The lack of practical modeling techniques significantly limits the diversity and complexity of knitted garments that can be simulated. We propose a new modeling technique that builds yarn-level models of complex knitted garments for virtual characters. We start with a polygonal model that represents the large-scale surface of the knitted cloth. Using this mesh as an input, our interactive modeling tool produces a finer mesh representing the layout of stitches in the garment, which we call the stitch mesh. By manipulating this mesh and assigning stitch types to its faces, the user can replicate a variety of complicated knitting patterns. The curve model representing the yarn is generated from the stitch mesh, then the final shape is computed by a yarn-level physical simulation that locally relaxes the yarn into realistic shape while preserving global shape of the garment and avoiding “yarn pull-through,” thereby producing valid yarn geometry suitable for dynamic simulation. Using our system, we can efficiently create yarn-level models of knitted clothing with a rich variety of patterns that would be completely impractical to model using traditional techniques. We show a variety of example knitting patterns and full-scale garments produced using our system.

Moritz Bächer, Bernd Bickel, Doug L. James and Hanspeter Pfister, Fabricating Articulated Characters from Skinned Meshes, ACM Transactions on Graphics (SIGGRAPH 2012), August 2012.

ABSTRACT:  Articulated deformable characters are widespread in computer animation. Unfortunately, we lack methods for their automatic fabrication using modern additive manufacturing (AM) technologies. We propose a method that takes a skinned mesh as input, then estimates a fabricatable single-material model that approximates the 3D kinematics of the corresponding virtual articulated character in a piecewise linear manner. We first extract a set of potential joint locations. From this set, together with optional, user-specified range constraints, we then estimate mechanical friction joints that satisfy inter-joint non-penetration and other fabrication constraints. To avoid brittle joint designs, we place joint centers on an approximate medial axis representation of the input geometry, and maximize each joint’s minimal cross-sectional area. We provide several demonstrations, manufactured as single, assembled pieces using 3D printers. 

Jeffrey N. Chadwick and Doug L. James, Animating Fire with Sound, ACM Transactions on Graphics (SIGGRAPH 2011), 30(4), August 2011.

ABSTRACT:  We propose a practical method for synthesizing plausible fire sounds that are synchronized with physically based fire animations.  To enable synthesis of combustion sounds without incurring the cost of time-stepping fluid simulations at audio rates, we decompose our synthesis procedure into two components. First, a low-frequency flame sound is synthesized using a physically based combustion sound model driven with data from a visual flame simulation run at a relatively low temporal sampling rate. Second, we propose two bandwidth extension methods for synthesizing additional high-frequency flame sound content: (1) spectral bandwidth extension which synthesizes higher-frequency noise matching combustion sound spectra from theory and experiment; and (2) data-driven texture synthesis to synthesize high-frequency content based on input flame sound recordings. Various examples and comparisons are presented demonstrating plausible flame sounds, from small candle flames to large flame jets.

Changxi Zheng and Doug L. James, Toward High-Quality Modal Contact Sound, ACM Transactions on Graphics (SIGGRAPH 2011), 30(4), August 2011.

ABSTRACT:  Contact sound models based on linear modal analysis are commonly used with rigid body dynamics. Unfortunately, treating vibrating objects as "rigid" during collision and contact processing fundamentally limits the range of sounds that can be computed, and contact solvers for rigid body animation can be ill-suited for modal contact sound synthesis, producing various sound artifacts. In this paper, we resolve modal vibrations in both collision and frictional contact processing stages, thereby enabling non-rigid sound phenomena such as micro-collisions, vibrational energy exchange, and chattering. We propose a frictional multibody contact formulation and modified Staggered Projections solver which is well-suited to sound rendering and avoids noise artifacts associated with spatial and temporal contact-force fluctuations which plague prior methods.  To enable practical animation and sound synthesis of numerous bodies with many coupled modes, we propose a novel asynchronous integrator with mode-level adaptivity built into the frictional contact solver. Vibrational contact damping is modeled to approximate contact-dependent sound dissipation. Results are provided that demonstrate high-quality contact resolution with sound.

Theodore Kim and Doug L. James, Physics-based Character Skinning using Multi-Domain Subspace Deformations, In ACM SIGGRAPH / Eurographics Symposium on Computer Animation, August 2011.  (Best paper award)

ABSTRACT:  We propose a domain-decomposition method to simulate articulated deformable characters entirely within a subspace framework. The method supports quasistatic and dynamic deformations, nonlinear kinematics and materials, and can achieve interactive time-stepping rates. To avoid artificial rigidity, or "locking," associated with coupling low-rank domain models together with hard constraints, we employ penalty-based coupling forces. The multi-domain subspace integrator can simulate deformations efficiently, and exploits efficient subspace-only evaluation of constraint forces between rotated domains using the so-called Fast Sandwich Transform (FST). Examples are presented for articulated characters with quasistatic and dynamic deformations, and interactive performance with hundreds of fully coupled modes. Using our method, we have observed speedups of between three and four orders of magnitude over full-rank, unreduced simulations.

Changxi Zheng and Doug L. James, Rigid-Body Fracture Sound with Precomputed Soundbanks, ACM Transactions on Graphics (SIGGRAPH 2010), 29(4), July 2010, pp. 69:1-69:13.

ABSTRACT:  We propose a physically based algorithm for synthesizing sounds synchronized with brittle fracture animations. Motivated by laboratory experiments, we approximate brittle fracture sounds using time-varying rigid-body sound models. We extend methods for fracturing rigid materials by proposing a fast quasistatic stress solver to resolve near-audio-rate fracture events, energy-based fracture pattern modeling and estimation of “crack”-related fracture impulses.  Multipole radiation models provide scalable sound radiation for complex debris and level of detail control. To reduce sound-model generation costs for complex fracture debris, we propose Precomputed Rigid-Body Soundbanks composed of precomputed ellipsoidal sound proxies. Examples and experiments are presented that demonstrate plausible and affordable brittle fracture sounds.
Jernej Barbič and Doug L. James, Subspace Self-Collision Culling, ACM Transactions on Graphics (SIGGRAPH 2010), 29(4), July 2010, pp. 81:1-81:9.

ABSTRACT:  We show how to greatly accelerate self-collision detection (SCD) for reduced deformable models. Given a triangle mesh and a set of deformation modes, our method precomputes Subspace Self-Collision Culling (SSCC) certificates which, if satisfied, prove the absence of self-collisions for large parts of the model. At runtime, bounding volume hierarchies augmented with our certificates can aggressively cull overlap tests and reduce hierarchy updates. Our method supports both discrete and continuous SCD, can handle complex geometry, and makes no assumptions about geometric smoothness or normal bounds. It is particularly effective for simulations with modest subspace deformations, where it can often verify the absence of self-collisions in constant time. Our certificates enable low amortized costs, in time and across many objects in multi-body dynamics simulations. Finally, SSCC is effective enough to support self-collision tests at audio rates, which we demonstrate by producing the first sound simulations of clattering objects.

Jonathan Kaldor, Doug L. James and Steve Marschner, Efficient Yarn-based Cloth with Adaptive Contact Linearization, ACM Transactions on Graphics (SIGGRAPH 2010), 29(4), July 2010, pp. 105:1-105:10.

ABSTRACT:  Yarn-based cloth simulation can improve visual quality but at high computational costs due to the reliance on numerous persistent yarn-yarn contacts to generate material behavior. Finding so many contacts in densely interlinked geometry is a pathological case for traditional collision detection, and the sheer number of contact interactions makes contact processing the simulation bottleneck. In this paper, we propose a method for approximating penalty-based contact forces in yarn-yarn collisions by computing the exact contact response at one time step, then using a rotated linear force model to approximate forces in nearby deformed configurations.  Because contacts internal to the cloth exhibit good temporal coherence, sufficient accuracy can be obtained with infrequent updates to the approximation, which are done adaptively in space and time.  Furthermore, by tracking contact models we reduce the time to detect new contacts. The end result is a 7- to 9-fold speedup in contact processing and a 4- to 5-fold overall speedup, enabling simulation of character-scale garments.

Jeffrey Chadwick, Steven An, and Doug L. James, Harmonic Shells: A Practical Nonlinear Sound Model for Near-Rigid Thin Shells, ACM Transactions on Graphics (SIGGRAPH ASIA Conference Proceedings), 28(5), December 2009, pp. 119:1-119:10.

ABSTRACT:  We propose a procedural method for synthesizing realistic sounds due to nonlinear thin-shell vibrations. We use linear modal analysis to generate a small-deformation displacement basis, then couple the modes together using nonlinear thin-shell forces. To enable audio-rate time-stepping of mode amplitudes with mesh-independent cost, we propose a reduced-order dynamics model based on a thin-shell cubature scheme. Limitations such as mode locking and pitch glide are addressed. To support fast evaluation of mid-frequency mode-based sound radiation for detailed meshes, we propose far-field acoustic transfer maps (FFAT maps) which can be precomputed using state-of-the-art fast Helmholtz multipole methods. Familiar examples are presented including rumbling trash cans and plastic bottles, crashing cymbals, and noisy sheet metal objects, each with increased richness over linear modal sound models.

Theodore Kim and Doug L. James, Skipping Steps in Deformable Simulation with Online Model Reduction, ACM Transactions on Graphics (SIGGRAPH ASIA Conference Proceedings), 28(5), December 2009, pp. 123:1-123:9.

ABSTRACT:  Finite element simulations of nonlinear deformable models are computationally costly, routinely taking hours or days to compute the motion of detailed meshes. Dimensional model reduction can make simulations orders of magnitude faster, but is unsuitable for general deformable body simulations because it requires expensive precomputations, and it can suppress motion that lies outside the span of a pre-specified low-rank basis. We present an online model reduction method that does not have these limitations. In lieu of precomputation, we analyze the motion of the full model as the simulation progresses, incrementally building a reduced-order nonlinear model, and detecting when our reduced model is capable of performing the next timestep. For these subspace steps, full-model computation is “skipped” and replaced with a very fast (on the order of milliseconds) reduced order step. We present algorithms for both dynamic and quasistatic simulations, and a “throttle” parameter that allows a user to trade off between faster, approximate previews and slower, more conservative results. For detailed meshes undergoing low-rank motion, we have observed speedups of over an order of magnitude with our method.
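
The core bookkeeping behind such an online scheme can be sketched very simply (a toy Gram–Schmidt version under illustrative tolerances, not the paper's algorithm): maintain an orthonormal basis of observed full states, skip to a reduced step when a new state projects onto the basis accurately, and enrich the basis otherwise.

```python
import numpy as np

def project_error(U, x):
    """Relative error of projecting state x onto the orthonormal basis U."""
    if U.shape[1] == 0:
        return 1.0
    r = x - U @ (U.T @ x)
    return np.linalg.norm(r) / np.linalg.norm(x)

def observe(U, x, tol=1e-6):
    """If x is well represented by U, a reduced ('skipped') step suffices;
    otherwise orthonormalize x into the basis via Gram-Schmidt."""
    if project_error(U, x) < tol:
        return U, True                      # subspace step: full solve skipped
    r = x - U @ (U.T @ x)
    r /= np.linalg.norm(r)
    return np.hstack([U, r[:, None]]), False

n = 50
U = np.zeros((n, 0))
x1 = np.random.default_rng(1).normal(size=n)
U, skipped1 = observe(U, x1)                # new direction: basis grows
U, skipped2 = observe(U, 2.0 * x1)          # same direction: can be skipped
```

The paper's contribution lies in making the skip decision and the reduced nonlinear step cheap and reliable; this sketch only shows the basis-growth logic.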

Changxi Zheng and Doug L. James, Harmonic Fluids, ACM Transactions on Graphics (SIGGRAPH 2009), 28(3), August 2009, pp. 37:1-37:12.

ABSTRACT:  Fluid sounds, such as splashing and pouring, are ubiquitous and familiar but we lack physically based algorithms to synthesize them in computer animation or interactive virtual environments. We propose a practical method for automatic procedural synthesis of synchronized harmonic bubble-based sounds from 3D fluid animations. To avoid audio-rate time-stepping of compressible fluids, we acoustically augment existing incompressible fluid solvers with particle-based models for bubble creation, vibration, advection, and radiation. Sound radiation from harmonic fluid vibrations is modeled using a time-varying linear superposition of bubble oscillators. We weight each oscillator by its bubble-to-ear acoustic transfer function, which is modeled as a discrete Green's function of the Helmholtz equation. To solve potentially millions of 3D Helmholtz problems, we propose a fast dual-domain multipole boundary-integral solver, with cost linear in the complexity of the fluid domain's boundary. Enhancements are proposed for robust evaluation, noise elimination, acceleration, and parallelization. Examples of harmonic fluid sounds are provided for water drops, pouring, babbling, and splashing phenomena, often with thousands of acoustic bubbles, and hundreds of thousands of transfer function solves.
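
The bubble oscillators above vibrate near the classical Minnaert resonance; the following small sketch uses the standard textbook formula (illustrative water/air constants, not the paper's solver):

```python
import math

def minnaert_frequency(radius, p0=101325.0, rho=998.0, gamma=1.4):
    """Natural frequency (Hz) of a spherical air bubble of radius r (m)
    in water: f = (1 / (2 pi r)) * sqrt(3 * gamma * p0 / rho),
    with ambient pressure p0, water density rho, and heat ratio gamma."""
    return math.sqrt(3.0 * gamma * p0 / rho) / (2.0 * math.pi * radius)

# A 1 mm radius bubble rings at roughly 3.3 kHz; smaller bubbles ring higher.
f = minnaert_frequency(1e-3)
```

This explains why splashing sounds are dominated by bright, rapidly decaying tones: typical entrained bubbles are sub-millimeter, placing their resonances in the kilohertz range.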

Steven An, Theodore Kim and Doug L. James, Optimizing Cubature for Efficient Integration of Subspace Deformations, ACM Transactions on Graphics (SIGGRAPH ASIA Conference Proceedings), 27(5), December 2008, pp. 165:1-165:10.

ABSTRACT:  We propose an efficient scheme for evaluating nonlinear subspace forces (and Jacobians) associated with subspace deformations. The core problem we address is efficient integration of the subspace force density over the 3D spatial domain. Similar to Gaussian quadrature schemes that efficiently integrate functions that lie in particular polynomial subspaces, we propose cubature schemes (multi-dimensional quadrature) optimized for efficient integration of force densities associated with particular subspace deformations, particular materials, and particular geometric domains. We support generic subspace deformation kinematics, and nonlinear hyperelastic materials. For an r-dimensional deformation subspace with O(r) cubature points, our method is able to evaluate subspace forces at O(r^2) cost. We also describe composite cubature rules for runtime error estimation. Results are provided for various subspace deformation models, several hyperelastic materials (St. Venant-Kirchhoff, Mooney-Rivlin, Arruda-Boyce), and multimodal (graphics, haptics, sound) applications. We show dramatically better efficiency than traditional Monte Carlo integration.
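
A toy version of the cubature idea: fit weights at a few sample points so a weighted sum reproduces the full spatial integral of a configuration-dependent density over training configurations, then reuse those weights for new configurations. (The paper also optimizes the point selection with nonnegative weights; here the points are fixed and random, and the 1D density is chosen so the fit is exact. All names and sizes are illustrative.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Full domain of sample points and random training configurations q.
n_full, n_cub, n_train = 200, 8, 40
x_full = np.linspace(0.0, 1.0, n_full)
q_train = rng.normal(size=n_train)

def density(x, q):
    # Toy force density; spans only two spatial functions, so a small
    # cubature point set can reproduce the integral exactly.
    return q * np.sin(3.0 * x) + (q ** 2) * x

# "Exact" integral (mean over all points) per training configuration.
b = np.array([density(x_full, q).mean() for q in q_train])

# Fix candidate cubature points and fit weights by least squares.
idx = rng.choice(n_full, size=n_cub, replace=False)
A = np.array([[density(x_full[i], q) for i in idx] for q in q_train])
w, *_ = np.linalg.lstsq(A, b, rcond=None)

# Cheap reduced evaluation at a new configuration.
q_new = 0.5
approx = sum(wi * density(x_full[i], q_new) for wi, i in zip(w, idx))
exact = density(x_full, q_new).mean()
```

The payoff is that runtime cost depends on the handful of cubature points, not on the full mesh.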

Danny M. Kaufman, Shinjiro Sueda, Doug L. James and Dinesh K. Pai, Staggered Projections for Frictional Contact in Multibody Systems, ACM Transactions on Graphics (SIGGRAPH ASIA Conference Proceedings), 27(5), December 2008, pp. 164:1-164:11.

ABSTRACT:  We present a new discrete, velocity-level formulation of frictional contact dynamics that reduces to a pair of coupled projections, and introduce a simple fixed-point property of the projections. This allows us to construct a novel algorithm for accurate frictional contact resolution based on a simple staggered sequence of projections.  The algorithm accelerates performance using warm starts to leverage the potentially high temporal coherence between contact states and provides users with direct control over frictional accuracy. Applying this algorithm to rigid and deformable systems, we obtain robust and accurate simulations of frictional contact behavior not previously possible, at rates suitable for interactive haptic simulations, as well as large-scale animations. By construction, the proposed algorithm guarantees exact, velocity-level contact constraint enforcement and obtains long-term stable and robust integration.  Examples are given to illustrate the performance, plausibility and accuracy of the obtained solutions.

Jonathan Kaldor, Doug L. James and Steve Marschner, Simulating Knitted Cloth at the Yarn Level, ACM Transactions on Graphics (SIGGRAPH Conference Proceedings), 27(3), August 2008, pp. 65:1-65:9.

ABSTRACT:  Knitted fabric is widely used in clothing because of its unique and stretchy behavior, which is fundamentally different from the behavior of woven cloth. The properties of knits come from the nonlinear, three-dimensional kinematics of long, inter-looping yarns, and despite significant advances in cloth animation we still do not know how to simulate knitted fabric faithfully. Existing cloth simulators mainly adopt elastic-sheet mechanical models inspired by woven materials, focusing less on the model itself than on important simulation challenges such as efficiency, stability, and robustness. We define a new computational model for knits in terms of the motion of yarns, rather than the motion of a sheet. Each yarn is modeled as an inextensible, yet otherwise flexible, B-spline tube. To simulate complex knitted garments, we propose an implicit-explicit integrator, with yarn inextensibility constraints imposed using efficient projections. Friction among yarns is approximated using rigid-body velocity filters, and key yarn-yarn interactions are mediated by stiff penalty forces. Our results show that this simple model predicts the key mechanical properties of different knits, as demonstrated by qualitative comparisons to observed deformations of actual samples in the laboratory, and that the simulator can scale up to substantial animations with complex dynamic motion.
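
The flavor of the inextensibility constraints can be illustrated with a much simpler polyline stand-in for the B-spline yarn (an iterative edge-length projection; the paper's efficient constrained projection is more sophisticated, and all values here are illustrative):

```python
import numpy as np

def project_inextensible(pts, rest_len, iters=50):
    """Iteratively restore every segment of a polyline yarn to its rest
    length by moving both segment endpoints symmetrically (Gauss-Seidel)."""
    pts = pts.copy()
    for _ in range(iters):
        for i in range(len(pts) - 1):
            d = pts[i + 1] - pts[i]
            length = np.linalg.norm(d)
            corr = 0.5 * (length - rest_len) * d / length
            pts[i] += corr
            pts[i + 1] -= corr
    return pts

# Stretch a straight 3-segment yarn, then project segment lengths back to 1.
yarn = np.array([[0.0, 0.0], [1.3, 0.0], [2.1, 0.0], [3.4, 0.0]])
fixed = project_inextensible(yarn, 1.0)
lengths = np.linalg.norm(np.diff(fixed, axis=0), axis=1)
```

Enforcing length exactly (rather than with stiff stretch springs) is what lets the yarn model take reasonable time steps.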

Christopher D. Twigg and Doug L. James, Backward Steps in Rigid Body Simulation, ACM Transactions on Graphics (SIGGRAPH Conference Proceedings), 27(3), August 2008, pp. 25:1-25:10.

ABSTRACT:  Physically based simulation of rigid body dynamics is commonly done by time-stepping systems forward in time. In this paper, we propose methods to allow time-stepping rigid body systems backward in time. Unfortunately, reverse-time integration of rigid bodies involving frictional contact is mathematically ill-posed, and can lack unique solutions. We instead propose time-reversed rigid body integrators that can sample possible solutions when unique ones do not exist. We also discuss challenges related to dissipation-related energy gain, sensitivity to initial conditions, stacking, constraints and articulation, rolling, sliding, skidding, bouncing, high angular velocities, rapid velocity growth from micro-collisions, and other problems encountered when going against the usual flow of time.

Theodore Kim, Nils Thuerey, Doug L. James and Markus Gross, Wavelet Turbulence for Fluid Simulation, ACM Transactions on Graphics (SIGGRAPH Conference Proceedings), 27(3), August 2008, pp. 50:1-50:6.

ABSTRACT:  We present a novel wavelet method for the simulation of fluids at high spatial resolution. The algorithm enables large- and small-scale detail to be edited separately, allowing high-resolution detail to be added as a post-processing step. Instead of solving the Navier-Stokes equations over a highly refined mesh, we use the wavelet decomposition of a low-resolution simulation to determine the location and energy characteristics of missing high-frequency components. We then synthesize these missing components using a novel incompressible turbulence function, and provide a method to maintain the temporal coherence of the resulting structures. There is no linear system to solve, so the method parallelizes trivially and requires only a few auxiliary arrays. The method guarantees that the new frequencies will not interfere with existing frequencies, allowing animators to set up a low resolution simulation quickly and later add details without changing the overall fluid motion.

Nicolas Bonneel, George Drettakis, Nicolas Tsingos, Isabelle Viaud-Delmon and Doug L. James, Fast Modal Sounds with Scalable Frequency-Domain Synthesis, ACM Transactions on Graphics (SIGGRAPH Conference Proceedings), 27(3), August 2008, pp. 24:1-24:9.

ABSTRACT:  Audio rendering of impact sounds, such as those caused by falling objects or explosion debris, adds realism to interactive 3D audio-visual applications, and can be convincingly achieved using modal sound synthesis. Unfortunately, mode-based computations can become prohibitively expensive when many objects, each with many modes, are impacted simultaneously. We introduce a fast sound synthesis approach, based on short-time Fourier transforms, that exploits the inherent sparsity of modal sounds in the frequency domain. For our test scenes, this “fast mode summation” can give speedups of 5-8 times compared to a time-domain solution, with slight degradation in quality. We discuss different reconstruction windows, affecting the quality of impact sound “attacks”. Our Fourier-domain processing method allows us to introduce a scalable, real-time, audio processing pipeline for both recorded and modal sounds, with auditory masking and sound source clustering. To avoid abrupt computation peaks, such as during the simultaneous impacts of an explosion, we use cross-modal perception results on audiovisual synchrony to effect temporal scheduling. We also conducted a pilot perceptual user evaluation of our method. Our implementation results show that we can treat complex audiovisual scenes in real time with high quality.
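
A modal impact sound is just a sparse sum of exponentially damped sinusoids; the following is a minimal sketch of the time-domain baseline that the paper's frequency-domain method accelerates (mode frequencies, dampings, and gains below are illustrative, not measured values):

```python
import numpy as np

def modal_impact_sound(freqs, dampings, gains, duration, sr=44100):
    """Time-domain modal synthesis: the response to a unit impulse is a
    sum of exponentially damped sinusoids, one per vibration mode."""
    t = np.arange(int(duration * sr)) / sr
    out = np.zeros_like(t)
    for f, d, g in zip(freqs, dampings, gains):
        out += g * np.exp(-d * t) * np.sin(2.0 * np.pi * f * t)
    return out

# Three illustrative modes (frequencies in Hz, dampings in 1/s).
s = modal_impact_sound([440.0, 1230.0, 2750.0], [8.0, 20.0, 45.0],
                       [1.0, 0.5, 0.25], duration=1.0)
```

Because each mode contributes only a narrow spectral peak, the signal is sparse in the frequency domain, which is the property the fast mode summation exploits.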

Six-DoF haptic rendering of contact between some Boeing 777 flexible hoses

Jernej Barbič and Doug L. James, Six-DoF haptic rendering of contact between geometrically complex reduced deformable models, IEEE Transactions on Haptics, 1(1):39–52, 2008.

ABSTRACT: Real-time evaluation of distributed contact forces between rigid or deformable 3D objects is a key ingredient of 6-DoF force-feedback rendering. Unfortunately, at very high temporal rates, there is often insufficient time to resolve contact between geometrically complex objects. We propose a spatially and temporally adaptive approach to approximate distributed contact forces under hard real-time constraints. Our method is CPU based, and supports contact between rigid or reduced deformable models with complex geometry. We propose a contact model that uses a point-based representation for one object, and a signed-distance field for the other. This model is related to the Voxmap Pointshell Method (VPS), but gives continuous contact forces and torques, enabling stable rendering of stiff penalty-based distributed contacts. We demonstrate that stable haptic interactions can be achieved by point-sampling offset surfaces to input “polygon soup” geometry using particle repulsion. We introduce a multi-resolution nested pointshell construction which permits level-of-detail contact forces, and enables graceful degradation of contact in close-proximity scenarios. Parametrically deformed distance fields are proposed for contact between reduced deformable objects. We present several examples of 6-DoF haptic rendering of geometrically complex rigid and deformable objects in distributed contact at real-time kilohertz rates.
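The point-vs-distance-field contact model can be sketched in a few lines: points sampled on one object are queried against the other object's signed-distance field, and penetrating points contribute penalty forces along the field gradient. A toy version, with an analytic unit-sphere distance field standing in for the precomputed field:

```python
import numpy as np

def sphere_sdf(p):
    """Signed distance to a unit sphere at the origin (negative inside), plus its gradient.
    A stand-in for the precomputed signed-distance field of the second object."""
    dist = np.linalg.norm(p)
    return dist - 1.0, p / (dist + 1e-12)

def contact_force(points, stiffness=1e3):
    """Sum penalty forces over pointshell points that penetrate the distance field.
    The force varies continuously with depth, which is key to stable haptic rendering."""
    total = np.zeros(3)
    for p in points:
        d, n = sphere_sdf(p)
        if d < 0.0:                       # point is inside the other object
            total += -stiffness * d * n   # push out along the field gradient
    return total
```

The multi-resolution pointshell of the paper evaluates exactly this kind of sum, but over nested point sets so that force quality can degrade gracefully under the 1 kHz budget.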

Twenty-First Century Waterfall:  Animating Water Bottle Recycling Rates

This outreach animation was made to raise awareness about the surprisingly poor recycling rates of plastic water bottles.

Jernej Barbič and Doug L. James,  Time-critical distributed contact for 6-DoF haptic rendering of adaptively sampled reduced deformable models, In Proceedings of ACM SIGGRAPH Symposium on Computer Animation (SCA 2007), San Diego, CA, August 2007.  (Best paper award)
ABSTRACT:  Real-time evaluation of distributed contact forces for rigid or deformable 3D objects is important for providing multi-sensory feedback in emerging real-time applications, such as 6-DoF haptic force-feedback rendering. Unfortunately, at very high temporal rates (1 kHz for haptics), there is often insufficient time to resolve distributed contact between geometrically complex objects.
    In this paper, we present a spatially and temporally adaptive sample-based approach to approximate contact forces under hard real-time constraints. The approach is CPU based, and supports both rigid and reduced deformable models with complex geometry. Penalty-based contact forces are efficiently resolved using a multi-resolution point-based representation for one object, and a signed-distance oracle for the other. Hard real-time approximation of distributed contact forces uses multi-level progressive point-contact sampling, and exploits temporal coherence, graceful degradation and other optimizations. We present several examples of 6-DoF haptic rendering of geometrically complex rigid or deformable objects in distributed contact at real-time kilohertz rates.

Christopher D. Twigg and Doug L. James,  Many-Worlds Browsing for Control of Multibody Dynamics, ACM Transactions on Graphics (Proc. SIGGRAPH 2007), 26(3), July 2007, pp. 14:1-14:8.
ABSTRACT:  Animation techniques for controlling passive simulation are commonly based on an optimization paradigm: the user provides goals a priori, and sophisticated numerical methods minimize a cost function that represents these goals. Unfortunately, for multibody systems with discontinuous contact events these optimization problems can be highly nontrivial to solve, and many-hour offline optimizations, unintuitive parameters, and convergence failures can frustrate end-users and limit usage. On the other hand, users are quite adaptable, and systems which provide interactive feedback via an intuitive interface can leverage the user’s own abilities to quickly produce interesting animations. However, the online computation necessary for interactivity limits scene complexity in practice.
    We introduce Many-Worlds Browsing, a method which circumvents these limits by exploiting the speed of multibody simulators to compute numerous example simulations in parallel (offline and online), and allow the user to browse and modify them interactively. We demonstrate intuitive interfaces through which the user can select among the examples and interactively adjust those parts of the scene that don’t match his requirements. We show that using a combination of our techniques, unusual and interesting results can be generated for moderately sized scenes with under an hour of user time. Scalability is demonstrated by sampling much larger scenes using modest offline computations.

Alec R. Rivers and Doug L. James,  FastLSM: Fast Lattice Shape Matching for Robust Real-Time Deformation, ACM Transactions on Graphics (Proc. SIGGRAPH 2007), 26(3), July 2007, pp. 82:1-82:6.
ABSTRACT:  We introduce a simple technique that enables robust approximation of volumetric, large-deformation dynamics for real-time or large-scale offline simulations. We propose Lattice Shape Matching, an extension of deformable shape matching to regular lattices with embedded geometry; lattice vertices are smoothed by convolution of rigid shape matching operators on local lattice regions, with the effective mechanical stiffness specified by the amount of smoothing via region width. Since the naive method can be very slow for stiff models (per-vertex costs scale cubically with region width), we provide a fast summation algorithm, Fast Lattice Shape Matching (FastLSM), that exploits the inherent summation redundancy of shape matching and can provide large-region matching at constant per-vertex cost. With this approach, large lattices can be simulated in linear time. We present several examples and benchmarks of an efficient CPU implementation, including many dozens of soft bodies simulated at real-time rates on a typical desktop machine.
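The rigid shape matching operator at the heart of the method fits a best rigid transform (a Procrustes problem) from a region's rest positions to its deformed positions. A minimal SVD-based sketch; FastLSM's contribution is evaluating this over many overlapping regions with shared partial sums, which this sketch does not show:

```python
import numpy as np

def shape_match(rest, deformed):
    """Best-fit rigid transform from rest to deformed particle positions (Kabsch/Procrustes),
    the operator applied per lattice region to obtain rigid goal positions."""
    c0, c1 = rest.mean(axis=0), deformed.mean(axis=0)
    A = (deformed - c1).T @ (rest - c0)          # moment (covariance) matrix
    U, _, Vt = np.linalg.svd(A)
    R = U @ Vt
    if np.linalg.det(R) < 0:                     # guard against reflections
        U[:, -1] *= -1
        R = U @ Vt
    goals = (rest - c0) @ R.T + c1               # rigid goal positions for the region
    return R, goals
```

Vertices are then pulled toward the goal positions of every region containing them, and wider regions yield stiffer behavior.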

Doug L. James, Christopher D. Twigg, Andrew Cove and Robert Y. Wang, Mesh Ensemble Motion Graphs: Data-driven Mesh Animation with Constraints, ACM Transactions on Graphics, 26(4), October 2007, pp. 17:1-17:16. 
ABSTRACT:  We describe a technique for using space-time cuts to smoothly transition between stochastic mesh animation clips involving numerous deformable mesh groups while subject to physical constraints. These transitions are used to construct Mesh Ensemble Motion Graphs for interactive data-driven animation of high-dimensional mesh animation datasets, such as those arising from expensive physical simulations of deformable objects blowing in the wind. We formulate the transition computation as an integer programming problem, and introduce a novel randomized algorithm to compute transitions subject to geometric noninterpenetration constraints.
Doug L. James, Jernej Barbić and Dinesh K. Pai, Precomputed Acoustic Transfer: Output-sensitive, accurate sound generation for geometrically complex vibration sources, ACM Transactions on Graphics, 25(3), July 2006, pp. 987-995.
ABSTRACT:  Simulating sounds produced by realistic vibrating objects is challenging because sound radiation involves complex diffraction and interreflection effects that are very perceptible and important. These wave phenomena are well understood, but have been largely ignored in computer graphics due to the high cost and complexity of computing them at audio rates. We describe a new algorithm for real-time synthesis of realistic sound radiation from rigid objects. We start by precomputing the linear vibration modes of an object, and then relate each mode to its sound pressure field, or acoustic transfer function, using standard methods from numerical acoustics. Each transfer function is then approximated to a specified accuracy using low-order multipole sources placed near the object. We provide a low-memory, multilevel, randomized algorithm for optimized source placement that is suitable for complex geometries. At runtime, we can simulate new interaction sounds by quickly summing contributions from each mode's equivalent multipole sources. We can efficiently simulate global effects such as interreflection and changes in sound due to listener location. The simulation costs can be dynamically traded off for sound quality. We present several examples of sound generation from physically based animations.
Doug L. James and Christopher D. Twigg, Skinning Mesh Animations, ACM Transactions on Graphics (ACM SIGGRAPH 2005), 24(3), August 2005, pp. 399-407.
ABSTRACT:  We extend approaches for skinning characters to the general setting of skinning deformable mesh animations. We provide an automatic algorithm for generating progressive skinning approximations, that is particularly efficient for pseudo-articulated motions. Our contributions include the use of nonparametric mean shift clustering of high-dimensional mesh rotation sequences to automatically identify statistically relevant bones, and robust least squares methods to determine bone transformations, bone-vertex influence sets, and vertex weight values. We use a low-rank data reduction model defined in the undeformed mesh configuration to provide progressive convergence with a fixed number of bones. We show that the resulting skinned animations enable efficient hardware rendering, rest pose editing, and deformable collision detection. Finally, we present numerous examples where skins were automatically generated using a single set of parameter values.
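At runtime the extracted bones, weights, and transforms drive standard linear blend skinning. A minimal sketch with toy inputs (the paper's contribution is estimating these quantities from mesh animations, not the blending itself):

```python
import numpy as np

def linear_blend_skin(rest, weights, R, t):
    """Linear blend skinning: each vertex is a weighted combination of per-bone
    rigid transforms applied to its rest position.
    rest: (n,3) vertices; weights: (n,b); R: (b,3,3) rotations; t: (b,3) translations."""
    per_bone = np.einsum('bij,nj->nbi', R, rest) + t[None, :, :]  # (n, b, 3)
    return np.einsum('nb,nbi->ni', weights, per_bone)             # blend over bones
```

Because this form maps directly onto vertex-program hardware, the skinned approximations render far faster than replaying the raw mesh animation.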
Jernej Barbič and Doug L. James, Real-Time Subspace Integration of St. Venant-Kirchhoff Deformable Models, ACM Transactions on Graphics (ACM SIGGRAPH 2005), 24(3), August 2005, pp. 982-990.
ABSTRACT:  In this paper, we present an approach for fast subspace integration of reduced-coordinate nonlinear deformable models that is suitable for interactive applications in computer graphics and haptics. Our approach exploits dimensional model reduction to build reduced-coordinate deformable models for objects with complex geometry.  We exploit the fact that model reduction on large deformation models with linear materials (as commonly used in graphics) results in internal force models that are simply cubic polynomials in reduced coordinates. Coefficients of these polynomials can be precomputed, for efficient runtime evaluation. This allows simulation of nonlinear dynamics using fast implicit Newmark subspace integrators, with subspace integration costs independent of geometric complexity. We present two useful approaches for generating low-dimensional subspace bases: modal derivatives and an interactive sketch. Mass-scaled principal component analysis (mass-PCA) is suggested for dimensionality reduction. Finally, several examples are given from computer animation to illustrate high performance, including force-feedback haptic rendering of a complicated object undergoing large deformations.
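The precomputation insight is that the reduced internal force is an exact cubic polynomial in the reduced coordinates, so runtime cost depends only on the subspace dimension r. A sketch with random stand-in coefficient tensors (in the paper these come from projecting the St. Venant-Kirchhoff forces onto the basis):

```python
import numpy as np

rng = np.random.default_rng(0)
r = 3  # subspace dimension (tiny, for illustration)

# Random stand-ins for the precomputed polynomial coefficient tensors.
P1 = rng.standard_normal((r, r))
P2 = rng.standard_normal((r, r, r))
P3 = rng.standard_normal((r, r, r, r))

def reduced_internal_force(q):
    """Reduced internal force: an exact cubic polynomial in reduced coordinates q.
    Cost is O(r^4) per evaluation, independent of the full mesh's complexity."""
    return (P1 @ q
            + np.einsum('ijk,j,k->i', P2, q, q)
            + np.einsum('ijkl,j,k,l->i', P3, q, q, q))
```

With forces this cheap, an implicit Newmark step in the subspace runs at interactive (even haptic) rates regardless of how many vertices the full model has.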
Doug L. James and Dinesh K. Pai, BD-Tree: Output-Sensitive Collision Detection for Reduced Deformable Models, ACM Transactions on Graphics (ACM SIGGRAPH 2004), 23(3), August 2004, pp. 393-398.
ABSTRACT:  We introduce the Bounded Deformation Tree, or BD-Tree, which can perform collision detection with reduced deformable models at costs comparable to collision detection with rigid objects. Reduced deformable models represent complex deformations as linear superpositions of arbitrary displacement fields, and are used in a variety of applications of interactive computer graphics. The BD-Tree is a bounding sphere hierarchy for output-sensitive collision detection with such models. Its bounding spheres can be updated after deformation in any order, and at a cost independent of the geometric complexity of the model; in fact the cost can be as low as one multiplication and addition per tested sphere, and at most linear in the number of reduced deformation coordinates. We show that the BD-Tree is also extremely simple to implement, and performs well in practice for a variety of real-time and complex off-line deformable simulation examples.   

"Niagara" sequence (12,201 chairs;  218,568,714 triangles;  level 8 collision depth):
  • VIDEO (avi [DivX], 512x384, 66MB, FULL 1m10s clip)
  • VIDEO (avi [DivX], 512x384, 4.4MB, MINI 5sec clip)
  • VIDEO (avi [DivX], 1024x768, 70MB, FULL 1m10s clip)
  • VIDEO (mov, 1024x768, 110MB, FULL 1m10s clip)
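The sphere-update idea behind the BD-Tree can be sketched directly: each sphere stores its rest radius plus one precomputed bound per reduced coordinate, and a conservative post-deformation radius is a single dot product with |q|. A toy version (sphere centers held fixed for simplicity; the paper also handles center updates):

```python
import numpy as np

def sphere_deltas(U):
    """Per-mode bound for one sphere: delta_j = max over contained points of |u_j(point)|.
    U has shape (n_points, 3, n_modes): displacement of each point per reduced coordinate."""
    return np.linalg.norm(U, axis=1).max(axis=0)   # shape (n_modes,)

def updated_radius(rest_radius, deltas, q):
    """Conservative radius after deformation q: one dot product per sphere,
    linear in the number of reduced coordinates, independent of vertex count."""
    return rest_radius + np.abs(q) @ deltas
```

By the triangle inequality, every displaced point stays inside the enlarged sphere, so the hierarchy remains a valid bounding volume after any deformation in the subspace.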

Doug L. James, Jernej Barbič, and Christopher D. Twigg, Squashing Cubes: Automating Deformable Model Construction for Graphics, In Proceedings of the SIGGRAPH 2004 Conference on Sketches & Applications. ACM Press, August 2004.
Doug L. James and Kayvon Fatahalian, Precomputing Interactive Dynamic Deformable Scenes, ACM Transactions on Graphics (ACM SIGGRAPH 2003), 22(3), pp. 879-887, 2003.
ABSTRACT:  We present an approach for precomputing data-driven models of interactive physically based deformable scenes. The method permits real-time hardware synthesis of nonlinear deformation dynamics, including self-contact and global illumination effects, and supports real-time user interaction. We use data-driven tabulation of the system's deterministic state space dynamics, and model reduction to build efficient low-rank parameterizations of the deformed shapes. To support runtime interaction, we also tabulate impulse response functions for a palette of external excitations. Although our approach simulates particular systems under very particular interaction conditions, it has several advantages. First, parameterizing all possible scene deformations enables us to precompute novel reduced coparameterizations of global scene illumination for low-frequency lighting conditions. Second, because the deformation dynamics are precomputed and parameterized as a whole, collisions are resolved within the scene during precomputation so that runtime self-collision handling is implicit. Optionally, the data-driven models can be synthesized on programmable graphics hardware, leaving only the low-dimensional state space dynamics and appearance data models to be computed by the main CPU.
  • PAPER (pdf,10MB)
  • VIDEO (avi-DivX, 14MB)
  • Related CMU technical report (contains additional images and appendices):
D. James and K. Fatahalian, Precomputing Interactive Dynamic Deformable Scenes, tech. report TR-03-33, Robotics Institute, Carnegie Mellon University, September, 2003.

Kayvon Fatahalian, Real-Time Global Illumination of Deformable Objects, Undergraduate Senior Research Thesis, Carnegie Mellon University, 2003.

Paul G. Kry, Doug L. James and Dinesh K. Pai, EigenSkin: Real Time Large Deformation Character Skinning in Hardware, ACM SIGGRAPH Symposium on Computer Animation, pp. 153-160, 2002.
ABSTRACT:  We present a technique which allows subtle nonlinear quasi-static deformations of articulated characters to be compactly approximated by data-dependent eigenbases which are optimized for real time rendering on commodity graphics hardware. The method extends the common Skeletal-Subspace Deformation (SSD) technique to provide efficient approximations of the complex deformation behaviours exhibited in simulated, measured, and artist-drawn characters. Instead of storing displacements for key poses (which may be numerous), we precompute principal components of the deformation influences for individual kinematic joints, and so construct error-optimal eigenbases describing each joint's deformation subspace. Pose-dependent deformations are then expressed in terms of these reduced eigenbases, allowing precomputed coefficients of the eigenbasis to be interpolated at run time. Vertex program hardware can then efficiently render nonlinear skin deformations using a small number of eigendisplacements stored in graphics hardware.  We refer to the final resulting character skinning construct as the model's EigenSkin. Animation results are presented for a very large nonlinear finite element model of a human hand rendered in real time at minimal cost to the main CPU.
Doug L. James and Dinesh K. Pai, DyRT: Dynamic Response Textures for Real Time Deformation Simulation with Graphics Hardware, ACM Transactions on Graphics (ACM SIGGRAPH 2002), 21(3), pp. 582-585, 2002.
    ABSTRACT:  In this paper we describe how to simulate geometrically complex, interactive, physically-based, volumetric, dynamic deformation models with negligible main CPU costs. This is achieved using a Dynamic Response Texture, or DyRT, that can be mapped onto any conventional animation as an optional rendering stage using commodity graphics hardware. The DyRT simulation process employs precomputed modal vibration models excited by rigid body motions. We present several examples, with an emphasis on bone-based character animation for interactive applications.
  • PAPER (pdf, 2.2MB)
  • CODE (for modal analysis)
  • Full length DyRT video  (mpg, 24MB)
    Excerpt:  DyRT-Man jumping (avi [mpg4], 700K)
    Excerpt:  Surgical simulation (mpg, 3MB)
Multizone precomputed Green's function
Doug L. James and Dinesh K. Pai, Real Time Simulation of Multizone Elastokinematic Models, 2002 IEEE Intl. Conference on Robotics and Automation, Washington DC, May 2002.
ABSTRACT:  We introduce precomputed multizone elastokinematic models for interactive simulation of multibody kinematic systems which include elastostatic deformations. This enables an efficient form of domain decomposition, suitable for interactive simulation of stiff flexible structures for real time applications such as interactive assembly. One advantage of multizone models is that each zone can have small strains, and hence be modeled with linear elasticity, while the entire multizone/multibody system admits large nonlinear relative strains. This permits fast capacitance matrix algorithms and precomputed Green's functions to be used for efficient real time simulation. Examples are given for a human finger modeled as a kinematic chain with a compliant elastic covering.
  • PAPER (pdf, 0.8MB)
    Finger motion (avi, 4.7MB)
    Elastokinematic contact (avi [mpg4], 2.5MB) 
    Elastokinematic Contact
    Elastokinematic contact (avi-DivX, 2.8MB)
Doug L. James and Dinesh K. Pai, Multiresolution Green's Function Methods for Interactive Simulation of Large-scale Elastostatic Objects, ACM Transactions on Graphics, 22(1), pp. 47-82, 2003.
    ABSTRACT:  We present a framework for low-latency interactive simulation of linear elastostatic models, and other systems arising from linear elliptic partial differential equations, which makes it feasible to interactively simulate large-scale physical models. The deformation of the models is described using precomputed Green's functions (GFs), and runtime boundary value problems (BVPs) are solved using existing Capacitance Matrix Algorithms (CMAs). Multiresolution techniques are introduced to control the amount of information input and output from the solver thus making it practical to simulate and store very large models. A key component is the efficient compressed representation of the precomputed GFs using second-generation wavelets on surfaces. This aids in reducing the large memory requirement of storing the dense GF matrix, and the fast inverse wavelet transform allows for fast summation methods to be used at runtime for response synthesis. Resulting GF compression factors are directly related to interactive simulation speedup, and examples are provided with hundredfold improvements at modest error levels. We also introduce a multiresolution constraint satisfaction technique formulated as an hierarchical CMA, so named because of its use of hierarchical GFs describing the response due to hierarchical basis constraints. This direct solution approach is suitable for hard real time simulation since it provides a mechanism for gracefully degrading to coarser resolution constraint approximations. The GFs' multiresolution displacement fields also allow for runtime adaptive multiresolution rendering.
  • PAPER: Preprint (pdf, 9.4MB) or final ACM Digital Library link.
  • VIDEOS: Real time force feedback simulations (using Java-based ARTDEFO simulator with Phantom haptic interface):
    Rabbit: Full L=4 wavelet GF model (mpg, 4.7MB)
    Dragon: Wavelet hierarchical GF model (mpg, 5.6MB)
    Both: (Hires avi-DivX, 4.9MB)
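The runtime Capacitance Matrix Algorithm rests on the Sherman-Morrison-Woodbury identity: when changed boundary conditions amount to a low-rank modification U V of the precomputed system, the new solution needs only the stored inverse (the Green's functions) and a small k-by-k capacitance solve. A dense toy sketch (random stand-in system, not an actual elastostatic matrix):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 50, 3
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned stand-in system
Ainv = np.linalg.inv(A)                            # plays the role of precomputed GFs
U = rng.standard_normal((n, k))                    # low-rank boundary-condition change
V = rng.standard_normal((k, n))
b = rng.standard_normal(n)

def smw_solve(Ainv, U, V, b):
    """Sherman-Morrison-Woodbury: solve (A + U V) x = b using only the stored A^{-1}
    and a k-by-k capacitance matrix, avoiding any large runtime factorization."""
    C = np.eye(U.shape[1]) + V @ (Ainv @ U)        # k x k capacitance matrix
    y = Ainv @ b
    return y - Ainv @ (U @ np.linalg.solve(C, V @ y))
```

The capacitance matrix here is exactly the object that doubles as the contact compliance in the haptic rendering work, which is why contact forces can be extracted much faster than the full deformation.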
Doug L. James, Multiresolution Green's Function Methods for Interactive Simulation of Large-scale Elastostatic Objects and other Physical Systems in Equilibrium, Ph.D. Thesis, Institute of Applied Mathematics, UBC, 2001.
   This thesis presents a framework for low-latency interactive simulation of linear elastostatic models and other systems associated with linear elliptic partial differential equations. This approach makes it feasible to interactively simulate large-scale physical models.
   Linearity is exploited by formulating the boundary value problem (BVP) solution in terms of Green’s functions (GFs) which may be precomputed to provide speed and cheap lookup operations. Runtime BVPs are solved using a collection of Capacitance Matrix Algorithms (CMAs) based on the Sherman-Morrison-Woodbury formula. Temporal coherence is exploited by caching and reusing, as well as sequentially updating, previous capacitance matrix inverses.
   Multiresolution enhancements make it practical to simulate and store very large models. Efficient compressed representations of precomputed GFs are obtained using second-generation wavelets defined on surfaces. Fast inverse wavelet transforms allow fast summation methods to be used to accelerate runtime BVP solution. Wavelet GF compression factors are directly related to interactive simulation speedup, and examples are provided with hundredfold improvements at modest error levels. Furthermore, hierarchical constraints are defined using hierarchical basis functions, and related hierarchical GFs are then used to construct an hierarchical CMA. This direct solution approach is suitable for hard real time simulation since it provides a mechanism for gracefully degrading to coarser resolution approximations, and the wavelet representations allow for runtime adaptive multiresolution rendering.
   These GF CMAs are well-suited to interactive haptic applications since GFs allow random access to solution components and the capacitance matrix is the contact compliance used for high-fidelity force feedback rendering. Examples are provided for distributed and point-like interactions.
   Precomputed multizone kinematic GF models are also considered, with examples provided for character animation in computer graphics.
   Finally, we briefly discuss the generation of multiresolution GF models using either numerical precomputation methods or reality-based robotic measurement.
  • THESIS (pdf, 18MB)
  • Exam programme (pdf, 200K)
  • Dinesh K. Pai, Kees van den Doel, Doug L. James, Jochen Lang, John E. Lloyd, Joshua L. Richmond, Som H. Yau, Scanning Physical Interaction Behavior of 3D Objects, Proceedings of ACM SIGGRAPH 2001, pp. 87-96, 2001.
    ABSTRACT:  We describe a system for constructing computer models of several aspects of physical interaction behavior, by scanning the response of real objects. The behaviors we can successfully scan and model include deformation response, contact textures for interaction with force-feedback, and contact sounds. The system we describe uses a highly automated robotic facility that can scan behavior models of whole objects. We provide a comprehensive view of the modeling process, including selection of model structure, measurement, estimation, and rendering at interactive rates. The results are demonstrated with two examples: a soft stuffed toy which has significant deformation behavior, and a hard clay pot which has significant contact textures and sounds.  The results described here make it possible to quickly construct physical interaction models of objects for applications in games, animation, and e-commerce.
  • PAPER (pdf, 1.5MB)
  • VIDEO (mpg, 16MB)
  • Defo Demo Events:
    • Precarn-IRIS Annual Conference on Intelligent Systems, Ottawa, June 4-5, 2001.  (best demo)
    • IEEE Intl. Conference on Computer Vision, Vancouver, July 9-12, 2001.
  • COMMENT:  Green's function descriptions of linear elastostatic models are inherently well suited to reality-based modeling. Using the UBC Active Measurement Facility (ACME) we have robotically automated the acquisition of real deformable models by directly measuring quantities related to Green's functions (see Jochen Lang's Ph.D. thesis). Once reconstructed, the models may be interacted with using fast Green's function simulation techniques. For this team project, I also worked on the reconstruction of textured multiresolution meshes from range data, and subsequent rendering of deformations using textured displaced subdivision surfaces.
  • Doug L. James and Dinesh K. Pai, A Unified Treatment of Elastostatic Contact Simulation for Real Time Haptics, Haptics-e, The Electronic Journal of Haptics Research, Vol. 2, Number 1, September 27, 2001.
      ABSTRACT:  We describe real-time, physically-based simulation algorithms for haptic interaction with elastic objects. Simulation of contact with elastic objects has been a challenge, due to the complexity of physically accurate simulation and the difficulty of constructing useful approximations suitable for real time interaction. We show that this challenge can be effectively solved for many applications. In particular global deformation of linear elastostatic objects can be efficiently solved with low run-time computational costs, using precomputed Green's functions and fast low-rank updates based on Capacitance Matrix Algorithms. The capacitance matrices constitute exact force response models, allowing contact forces to be computed much faster than global deformation behavior. Vertex pressure masks are introduced to support the convenient abstraction of localized scale-specific point-like contact with an elastic and/or rigid surface approximated by a polyhedral mesh. Finally, we present several examples using the CyberGlove and PHANToM haptic interfaces.
    • PAPER (pdf, 1.6MB)
    • VIDEOS:
    • CyberGlove-based grasping
      (avi: 160x120 [3MB] or 320x240 [10MB])
      Early PHANToM demos (avi, 9MB)
      Funky bicycle banana seat (whoa?!) (mpg, 2.5MB)
    Doug L. James and Dinesh K. Pai, ARTDEFO: Accurate Real Time Deformable Objects, Proceedings of ACM SIGGRAPH 99, pp. 65-72, 1999.
    ABSTRACT:  We present an algorithm for fast, physically accurate simulation of deformable objects suitable for real time animation and virtual environment interaction. We describe the boundary integral equation formulation of static linear elasticity as well as the related Boundary Element Method (BEM) discretization technique. In addition, we show how to exploit the coherence of typical interactions to achieve low latency; the boundary formulation lends itself well to a fast update method when a few boundary conditions change. The algorithms are described in detail with examples from ArtDefo, our implementation.
  • PAPER (pdf, 1.2MB) 
  • VIDEO (avi, 6MB)
  •  VIDEO (mov, 1.1MB)
  • ARTDEFO Picture Gallery! (WARNING: This is ancient)

    This material is based upon work supported by the National Science Foundation under Grant No. 0347740.
    Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.