SCA 2021: Eurographics/SIGGRAPH Symposium on Computer Animation
Browsing SCA 2021: Eurographics/SIGGRAPH Symposium on Computer Animation by Author "Erleben, Kenny"
Now showing 1 - 2 of 2
Item
Coupling Friction with Visual Appearance (ACM, 2021)
Authors: Andrews, Sheldon; Nassif, Loic; Erleben, Kenny; Kry, Paul G.
Editors: Narain, Rahul; Neff, Michael; Zordan, Victor
We present a novel meso-scale model for computing anisotropic and asymmetric friction for contacts in rigid body simulations, based on surface facet orientations. The main idea behind our approach is to compute a direction-dependent friction coefficient determined by an object's roughness. Specifically, we target contacts where friction depends on asperity interlocking, at a scale where surface roughness is also a visual characteristic of the surface. A GPU rendering pipeline rasterizes the surfaces using a shallow-depth orthographic projection at each contact point in order to sample facet normal information from both surfaces, which we then combine to produce direction-dependent friction coefficients that can be used directly in typical LCP contact solvers, such as the projected Gauss-Seidel method. We demonstrate our approach with a variety of rough textures, where the roughness is visible both in the rendering and in the motion produced by the physical simulation.

Item
Global Position Prediction for Interactive Motion Capture (ACM, 2021)
Authors: Schreiner, Paul; Perepichka, Maksym; Lewis, Hayden; Darkner, Sune; Kry, Paul G.; Erleben, Kenny; Zordan, Victor B.
Editors: Narain, Rahul; Neff, Michael; Zordan, Victor
We present a method for reconstructing the global position of motion capture where position sensing is poor or unavailable. Capture systems such as IMU suits can provide excellent pose and orientation data of a capture subject, but otherwise require post-processing to estimate global position. We propose a solution that trains a neural network to predict, in real time, the height and body displacement given a short window of pose and orientation data. Our training dataset contains pre-recorded data with global positions from many different capture subjects performing a wide variety of activities, in order to train a network broadly enough to estimate on both similar and unseen activities. We compare training on two network architectures, a U-net and a traditional convolutional neural network (CNN), observing better error properties for the U-net in our results. We also evaluate our method for different classes of motion. We observe high-quality results for motion examples with good representation in specialized datasets, while general performance appears better with a more broadly sampled dataset when input motions are far from training examples.
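The direction-dependent friction idea in the first item above can be illustrated with a minimal sketch: given facet normals sampled from both surfaces at a contact, facets that tilt against the sliding direction (interlocking asperities) raise the effective coefficient. The function name, the linear combination rule, and the synthetic normals below are illustrative assumptions, not the paper's actual GPU-based formulation.

```python
import numpy as np

def direction_dependent_mu(normals_a, normals_b, slide_dir, mu_base=0.3, k=0.5):
    """Hypothetical sketch: combine sampled facet normals from two
    contacting surfaces into a friction coefficient for one sliding
    direction. Facets of A facing into the motion, and facets of B
    facing along it, obstruct sliding and raise the coefficient."""
    d = slide_dir / np.linalg.norm(slide_dir)
    # A negative dot product n_a . d means the facet "faces into" the motion.
    resist_a = np.clip(-normals_a @ d, 0.0, None).mean()
    resist_b = np.clip(normals_b @ d, 0.0, None).mean()
    return mu_base + k * (resist_a + resist_b)

# Usage: synthetic rough-surface normals, mostly pointing "up" (+z),
# with an x-bias on each surface to make the friction asymmetric.
rng = np.random.default_rng(0)
def rough_normals(tilt):
    n = np.tile([0.0, 0.0, 1.0], (256, 1)) + rng.normal(scale=0.1, size=(256, 3))
    n[:, 0] += tilt  # bias facet orientation toward +x (or -x)
    return n / np.linalg.norm(n, axis=1, keepdims=True)

na, nb = rough_normals(0.2), rough_normals(-0.2)
mu_plus_x = direction_dependent_mu(na, nb, np.array([1.0, 0.0, 0.0]))
mu_minus_x = direction_dependent_mu(na, nb, np.array([-1.0, 0.0, 0.0]))
# Sliding "against the grain" (-x) yields a larger coefficient than +x.
```

Evaluating such a coefficient per candidate sliding direction yields the anisotropic, asymmetric friction values that an LCP solver like projected Gauss-Seidel can consume directly.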
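The inference loop implied by the second item above might be sketched as follows: a trained model maps each short window of pose/orientation features to a root height and a horizontal displacement, and integrating the displacements recovers a global trajectory. The function signature, window shape, and the stand-in predictor are assumptions for illustration, not the paper's network.

```python
import numpy as np

def reconstruct_trajectory(pose_windows, predict):
    """Hypothetical sketch: `predict` maps one window of pose/orientation
    features to (height, horizontal displacement) for the window's last
    frame; summing displacements recovers the global horizontal path."""
    heights, positions = [], [np.zeros(2)]
    for w in pose_windows:
        h, dxy = predict(w)              # per-window height and (dx, dy)
        heights.append(h)
        positions.append(positions[-1] + dxy)
    return np.array(heights), np.array(positions[1:])

# Usage with a stand-in "model": constant 1 cm forward step per window.
dummy_model = lambda w: (1.0, np.array([0.01, 0.0]))
windows = [np.zeros((30, 64)) for _ in range(100)]  # 30-frame windows of 64-D features
h, p = reconstruct_trajectory(windows, dummy_model)
# p[-1] ≈ [1.0, 0.0]: 100 integrated steps of 1 cm forward
```

Predicting displacement per window rather than absolute position is what lets the method run in real time on streams where absolute position sensing is unavailable; the cost is that integration can drift for motions far from the training distribution.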