A Novel Approach for Cooperative Motion Capture (COMOCAP)

dc.contributor.author: Welch, Gregory
dc.contributor.author: Wang, Tianren
dc.contributor.author: Bishop, Gary
dc.contributor.author: Bruder, Gerd
dc.contributor.editor: Bruder, Gerd and Yoshimoto, Shunsuke and Cobb, Sue
dc.date.accessioned: 2018-11-06T16:07:31Z
dc.date.available: 2018-11-06T16:07:31Z
dc.date.issued: 2018
dc.description.abstract: Conventional motion capture (MOCAP) systems, e.g., optical systems, typically perform well for one person, but less so for multiple people in close proximity. Measurement quality can decline with distance, and even drop out as source/sensor components are occluded by nearby people. Furthermore, conventional optical MOCAP systems estimate body posture using a global estimation approach employing cameras that are fixed in the environment, typically at a distance such that one person or object can easily occlude another, and the relative error between tracked objects in the scene can increase as they move farther from the cameras and/or closer to each other. Body-relative tracking approaches use body-worn sensors and/or sources to track limbs with respect to the head or torso, for example, taking advantage of the proximity of limbs to the body. We present a novel approach to MOCAP that combines and extends conventional global and body-relative approaches by distributing both sensing and active signaling over each person's body to facilitate body-relative (intra-user) MOCAP for one person and body-body (inter-user) MOCAP for multiple people, in an approach we call cooperative motion capture (COMOCAP). We support the validity of the approach with simulation results from a system comprised of acoustic transceivers (receiver-transmitter units) that provide inter-transceiver range measurements. Optical, magnetic, and other types of transceivers could also be used. Our simulations demonstrate the advantages of this approach to effectively improve accuracy and robustness to occlusions in situations of close proximity between multiple persons.
dc.description.sectionheaders: Sensing and Rendering
dc.description.seriesinformation: ICAT-EGVE 2018 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments
dc.identifier.doi: 10.2312/egve.20181317
dc.identifier.isbn: 978-3-03868-058-1
dc.identifier.issn: 1727-530X
dc.identifier.pages: 73-80
dc.identifier.uri: https://doi.org/10.2312/egve.20181317
dc.identifier.uri: https://diglib.eg.org:443/handle/10.2312/egve20181317
dc.publisher: The Eurographics Association
dc.subject: Human-centered computing
dc.subject: Mixed / augmented reality
dc.subject: Virtual reality
dc.subject: Graphics input devices
dc.subject: Computing methodologies
dc.subject: Motion capture
dc.title: A Novel Approach for Cooperative Motion Capture (COMOCAP)
Files
Original bundle
Name: 073-080.pdf
Size: 1.66 MB
Format: Adobe Portable Document Format