Search Results (showing 1-4 of 4)
Item: A Mixed Reality Application for Multi-Floor Building Evacuation Drills using Real-Time Pathfinding and Dynamic 3D Modeling (The Eurographics Association, 2024)
Authors: Manfredi, Gilda; Capece, Nicola; Di Carlo, Rosario Pio; Erra, Ugo; Caputo, Ariel; Garro, Valeria; Giachetti, Andrea; Castellani, Umberto; Dulecha, Tinsae Gebrechristos
Abstract: In modern high-rise buildings, complex layouts and frequent structural changes often hinder emergency evacuation. Traditional evacuation plans, usually 2D diagrams, provide no real-time guidance and are difficult for occupants to interpret. We propose a Mixed Reality (MR) application that addresses these challenges by providing real-time evacuation guidance in multi-floor buildings. The application was developed on the Meta Quest 3, chosen as one of the best low-cost eXtended Reality (XR) headsets and a popular standalone Head-Mounted Display (HMD). Our system allows users to rapidly rescan and update building models, ensuring that evacuation guidance is always up to date. The proposed approach overcomes the Meta Quest 3 API's limit of scanning only 15 rooms by saving room data externally and using spatial anchors to maintain accurate alignment with the physical environment. Additionally, the application integrates Dijkstra's algorithm to dynamically compute optimal escape routes based on the user's real-time location.
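The dynamic route computation described in the abstract above can be sketched with a standard Dijkstra search, assuming the scanned building is reduced to a weighted graph whose nodes are rooms or waypoints and whose edges are walkable connections. The room names, distances, and graph layout below are illustrative stand-ins, not the paper's actual implementation.

```python
import heapq

def shortest_escape_route(graph, start, exits):
    """Dijkstra from `start` to the nearest node in `exits`.
    `graph` maps node -> list of (neighbor, distance) pairs."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry, a shorter path was already found
        if node in exits:
            # Reconstruct the path by walking predecessors back to start.
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return list(reversed(path)), d
        for neighbor, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                prev[neighbor] = node
                heapq.heappush(pq, (nd, neighbor))
    return None, float("inf")  # no exit reachable

# Hypothetical two-floor layout: rooms joined by a corridor and a stairwell.
building = {
    "office_2F": [("corridor_2F", 4.0)],
    "corridor_2F": [("office_2F", 4.0), ("stairs", 6.0)],
    "stairs": [("corridor_2F", 6.0), ("lobby_1F", 5.0)],
    "lobby_1F": [("stairs", 5.0), ("exit_main", 3.0)],
    "exit_main": [],
}
route, length = shortest_escape_route(building, "office_2F", {"exit_main"})
```

Recomputing the route whenever the user's current node changes yields the "dynamic" behavior the abstract describes; the graph itself can be rebuilt after each rescan.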
A preliminary evaluation study demonstrates the application's effectiveness in enhancing situational awareness and keeping users alert, highlighting its potential to significantly improve decision-making and emergency response in dynamic building environments.

Item: Smart Tools and Applications in Graphics - Eurographics Italian Chapter Conference: Frontmatter (The Eurographics Association, 2023)
Authors: Banterle, Francesco; Caggianese, Giuseppe; Capece, Nicola; Erra, Ugo; Lupinetti, Katia; Manfredi, Gilda

Item: Exploring Upper Limb Segmentation with Deep Learning for Augmented Virtuality (The Eurographics Association, 2021)
Authors: Gruosso, Monica; Capece, Nicola; Erra, Ugo; Frosini, Patrizio; Giorgi, Daniela; Melzi, Simone; Rodolà, Emanuele
Abstract: Sense of presence, immersion, and body ownership are among the main challenges for Virtual Reality (VR) and freehand interaction methods. Through dedicated hand-tracking devices, freehand methods allow users to interact with the virtual environment (VE) using their hands. To visualize the hands and ease freehand interaction, recent approaches use 3D meshes to represent the user's hands in the VE. However, this can reduce user immersion because the meshes do not correspond naturally to the real hands. To overcome this limitation, we propose an Augmented Virtuality (AV) pipeline that lets users see their own limbs in the VE. The limbs are captured by a single monocular RGB camera placed in an egocentric perspective, segmented by a deep convolutional neural network (CNN), and streamed into the VE. In addition, hands are tracked with a Leap Motion controller to enable user interaction. We introduce two case studies as a preliminary investigation of this approach.
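The capture-segment-stream pipeline described above implies a compositing step: given an RGB frame and a per-pixel segmentation mask from the CNN, keep only the limb pixels and make everything else transparent before sending the layer into the VE. The sketch below illustrates that step on plain nested lists; the frame, mask, and threshold are illustrative stand-ins, not the paper's actual CNN output.

```python
def composite_limb_layer(frame, mask, threshold=0.5):
    """frame: H x W list of (r, g, b); mask: H x W list of floats in [0, 1].
    Returns an H x W list of (r, g, b, a) where a=255 only on limb pixels."""
    out = []
    for row_rgb, row_m in zip(frame, mask):
        out_row = []
        for (r, g, b), m in zip(row_rgb, row_m):
            alpha = 255 if m >= threshold else 0  # binarize the soft mask
            out_row.append((r, g, b, alpha))
        out.append(out_row)
    return out

# Tiny 2x2 example: only the top-left pixel is classified as limb.
frame = [[(200, 150, 120), (10, 10, 10)],
         [(30, 30, 30), (40, 40, 40)]]
mask = [[0.9, 0.1],
        [0.2, 0.3]]
layer = composite_limb_layer(frame, mask)
```

In a real implementation this per-pixel loop would run on the GPU (e.g. as a shader applying the mask as an alpha channel), but the logic is the same.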
Finally, quantitative and qualitative evaluations of the CNN are provided, highlighting its effectiveness and its remarkable results in several unconstrained real-life scenarios.

Item: AvatarizeMe: A Fast Software Tool for Transforming Selfies into Animatable Lifelike Avatars Using Machine Learning (The Eurographics Association, 2023)
Authors: Manfredi, Gilda; Capece, Nicola; Erra, Ugo
Abstract: Creating realistic avatars that faithfully replicate facial features from a single input image is a challenging task in computer graphics, virtual communication, and interactive entertainment. Such avatars have the potential to revolutionize virtual experiences by enhancing user engagement and personalization. However, existing methods, such as 3D facial capture systems, are costly and complex. Our approach adopts the 3D Morphable Face Model (3DMM) method to create avatars with remarkably realistic features in a matter of seconds, using only a single input image. The method goes beyond facial shape resemblance: it also generates facial and body textures, enhancing the overall likeness. Within Unreal Engine 5, our avatars come to life with real-time body and facial animation, made possible by a versatile skeleton for body and head movements and a suite of 52 face blendshapes that let the avatar convey emotions and expressions with fidelity. This poster presents our approach, bridging the gap between reality and virtual representation and opening the door to immersive virtual experiences with lifelike avatars.
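The 52 face blendshapes mentioned in the AvatarizeMe abstract follow a standard pattern: each blendshape stores per-vertex offsets from the neutral face, and an animated weight in [0, 1] scales its contribution. A minimal sketch of that combination, with a tiny illustrative mesh and shape name rather than AvatarizeMe's actual data:

```python
def apply_blendshapes(neutral, blendshapes, weights):
    """neutral: list of (x, y, z) vertices; blendshapes: name -> list of
    per-vertex (dx, dy, dz) offsets; weights: name -> float in [0, 1]."""
    result = [list(v) for v in neutral]
    for name, offsets in blendshapes.items():
        w = weights.get(name, 0.0)
        if w == 0.0:
            continue  # inactive shape contributes nothing
        for vertex, (dx, dy, dz) in zip(result, offsets):
            vertex[0] += w * dx
            vertex[1] += w * dy
            vertex[2] += w * dz
    return [tuple(v) for v in result]

# Two-vertex "mesh" with one hypothetical smile shape at half strength.
neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
shapes = {"mouthSmile": [(0.0, 0.2, 0.0), (0.0, 0.4, 0.0)]}
posed = apply_blendshapes(neutral, shapes, {"mouthSmile": 0.5})
```

Driving the 52 weights per frame (from tracked or scripted expression data) is what lets an engine such as Unreal Engine 5 animate the face in real time.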