Title: Estimating the Pose of a Medical Manikin for Haptic Augmentation of a Virtual Patient in Mixed Reality Training
Authors: Scherfgen, David; Schild, Jonas; Maiero, Jens
Editors: Weier, Martin; Zielasko, Daniel
Date: 2021-09-07
Year: 2021
ISBN: 978-3-03868-159-5
ISSN: 1727-530X
DOI: https://doi.org/10.2312/egve.20211334
URI: https://diglib.eg.org:443/handle/10.2312/egve20211334
Pages: 3-4
Keywords: Human centered computing; Mixed / augmented reality; Virtual reality

Abstract: Virtual medical emergency training provides complex yet safe interactions with virtual patients. Haptically integrating a medical manikin into virtual training has the potential to improve both the interaction with a virtual patient and the overall training experience. We present a system that estimates the 3D pose of a medical manikin in order to haptically augment a human model in a virtual reality training environment, allowing users to physically touch a virtual patient. The system uses an existing convolutional neural network (CNN)-based body keypoint detector to locate relevant 2D keypoints of the manikin in the images of the stereo camera built into a head-mounted display. The manikin's position, orientation, and joint angles are then found by non-linear optimization. A preliminary analysis reports an error of 4.3 cm. The system is not yet capable of real-time processing.
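
The abstract names two processing steps: a CNN-based detector that locates 2D keypoints of the manikin in the HMD's stereo images, and a non-linear optimization that recovers the manikin's position, orientation, and joint angles. Below is a minimal sketch of the second step only, not the authors' implementation: it assumes a toy four-keypoint kinematic model with a single joint angle, made-up pinhole intrinsics for one camera, synthetic detections in place of the CNN output, and SciPy's least_squares as the solver; the paper does not specify its kinematic model, camera parameters, or optimizer.

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    # Assumed pinhole intrinsics for one camera of the HMD's stereo pair (made up).
    K = np.array([[450.0,   0.0, 320.0],
                  [  0.0, 450.0, 240.0],
                  [  0.0,   0.0,   1.0]])

    def manikin_keypoints_3d(arm_angle):
        """Toy kinematic model (assumption): four keypoints in the manikin's
        local frame, with one arm raised forward by a single joint angle."""
        pelvis = np.zeros(3)
        head = np.array([0.0, 0.7, 0.0])
        shoulder = np.array([0.25, 0.55, 0.0])
        arm = Rotation.from_euler('x', arm_angle).apply([0.0, -0.5, 0.0])
        return np.stack([pelvis, head, shoulder, shoulder + arm])

    def project(points_3d, rvec, tvec):
        """Rigidly transform model points into the camera frame and project them."""
        cam = Rotation.from_rotvec(rvec).apply(points_3d) + tvec
        uvw = cam @ K.T
        return uvw[:, :2] / uvw[:, 2:3]

    def residuals(params, observed_2d):
        """Reprojection error: predicted vs. detected 2D keypoints."""
        rvec, tvec, arm_angle = params[:3], params[3:6], params[6]
        predicted_2d = project(manikin_keypoints_3d(arm_angle), rvec, tvec)
        return (predicted_2d - observed_2d).ravel()

    # Synthetic "detections" generated from a known ground-truth pose so the
    # sketch is self-contained; in the described system these would come from
    # the CNN keypoint detector instead.
    gt = np.array([0.1, -0.2, 0.05,   0.0, 0.1, 2.0,   0.4])
    observed = project(manikin_keypoints_3d(gt[6]), gt[:3], gt[3:6])

    x0 = np.zeros(7)
    x0[5] = 1.5  # rough initial guess: manikin about 1.5 m in front of the camera
    result = least_squares(residuals, x0, args=(observed,))
    print("estimated [rotation | translation | joint angle]:", np.round(result.x, 3))

A full version of this step would presumably stack reprojection residuals from both views of the stereo camera and from all of the manikin's modeled joints, rather than the single camera and single joint angle assumed here.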