Eurographics Local Chapter Events
Browsing Eurographics Local Chapter Events by Author "Agus, Marco"
Now showing 1 - 7 of 7
Item: Frontmatter: STAG 2018: Smart Tools and Applications in computer Graphics (The Eurographics Association, 2018)
Authors: Signoroni, Alberto; Livesu, Marco; Agus, Marco
Editors: Livesu, Marco; Pintore, Gianni; Signoroni, Alberto

Item: A Gaze Detection System for Neuropsychiatric Disorders Remote Diagnosis Support (The Eurographics Association, 2023)
Authors: Cangelosi, Antonio; Antola, Gabriele; Iacono, Alberto Lo; Santamaria, Alfonso; Clerico, Marinella; Al-Thani, Dena; Agus, Marco; Calì, Corrado
Editors: Banterle, Francesco; Caggianese, Giuseppe; Capece, Nicola; Erra, Ugo; Lupinetti, Katia; Manfredi, Gilda
Abstract: Accurate and early diagnosis of neuropsychiatric disorders, such as Autism Spectrum Disorders (ASD), is a significant challenge in clinical practice. This study explores the use of real-time gaze tracking as a tool for unbiased and quantitative analysis of eye gaze. The results of this study could support the diagnosis of such disorders and potentially serve as a tool in the field of rehabilitation. The proposed setup consists of an RGB-D camera embedded in latest-generation smartphones and a set of processing components for the analysis of recorded data related to patient interactivity. The proposed system is easy to use, requires little knowledge or expertise, and achieves a high level of accuracy. Because of this, it can be used remotely (telemedicine) to simplify diagnosis and rehabilitation processes. We present initial findings that show how real-time gaze tracking can be a valuable tool for doctors: it is a non-invasive device that provides unbiased quantitative data that can aid in early detection, monitoring, and treatment evaluation. This study's findings have significant implications for the advancement of ASD research. The innovative approach proposed in this study has the potential to enhance diagnostic accuracy and improve patient outcomes.
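To illustrate the kind of post-processing such a gaze-tracking pipeline might apply to recorded gaze data, below is a minimal, hypothetical sketch of dispersion-based fixation detection (I-DT). The data layout, thresholds, and function names are illustrative assumptions and are not taken from the paper.

```python
# Hypothetical sketch: dispersion-based fixation detection (I-DT) on recorded
# gaze samples. Thresholds and data layout are illustrative assumptions only.
import numpy as np

def detect_fixations(gaze_xy, timestamps, max_dispersion=0.03, min_duration=0.1):
    """Return (start_time, end_time, centroid) tuples for detected fixations.

    gaze_xy    : (N, 2) array of normalized gaze points on the screen plane.
    timestamps : (N,) array of sample times in seconds.
    """
    fixations, start = [], 0
    n = len(gaze_xy)
    while start < n:
        end = start + 1
        # Grow the window while the gaze points stay within the dispersion limit.
        while end < n:
            window = gaze_xy[start:end + 1]
            dispersion = (window.max(0) - window.min(0)).sum()
            if dispersion > max_dispersion:
                break
            end += 1
        if timestamps[end - 1] - timestamps[start] >= min_duration:
            fixations.append((timestamps[start], timestamps[end - 1],
                              gaze_xy[start:end].mean(axis=0)))
            start = end
        else:
            start += 1
    return fixations
```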
Item: Immersive Environment for Creating, Proofreading, and Exploring Skeletons of Nanometric Scale Neural Structures (The Eurographics Association, 2019)
Authors: Boges, Daniya; Calì, Corrado; Magistretti, Pierre J.; Hadwiger, Markus; Sicat, Ronell; Agus, Marco
Editors: Agus, Marco; Corsini, Massimiliano; Pintus, Ruggero
Abstract: We present a novel immersive environment for the exploratory analysis of nanoscale cellular reconstructions of rodent brain samples acquired through electron microscopy. The system is focused on medial axis representations (skeletons) of branched and tubular structures of brain cells, and it is specifically designed for: i) effective semi-automatic creation of skeletons from surface-based representations of cells and structures; ii) fast proofreading, i.e., correcting and editing of semi-automatically constructed skeleton representations; and iii) useful exploration, i.e., measuring, comparing, and analyzing geometric features related to cellular structures based on medial axis representations. The application runs in a standard PC-tethered virtual reality (VR) setup with a head-mounted display (HMD), controllers, and tracking sensors. The system is currently used by neuroscientists for performing morphology studies on sparse reconstructions of glial cells and neurons extracted from a sample of the somatosensory cortex of a juvenile rat.

Item: Mixed Reality for Orthopedic Elbow Surgery Training and Operating Room Applications: A Preliminary Analysis (The Eurographics Association, 2023)
Authors: Cangelosi, Antonio; Riberi, Giacomo; Salvi, Massimo; Molinari, Filippo; Titolo, Paolo; Agus, Marco; Calì, Corrado
Editors: Banterle, Francesco; Caggianese, Giuseppe; Capece, Nicola; Erra, Ugo; Lupinetti, Katia; Manfredi, Gilda
Abstract: The use of Mixed Reality in medicine is widely documented as a candidate to revolutionize surgical interventions. In this paper we present a system to simulate K-wire placement, a common orthopedic procedure used to stabilize fractures, dislocations, and other traumatic injuries. With the described system, it is possible to leverage Mixed Reality (MR) and advanced visualization techniques applied to a surgical simulation phantom to enhance surgical training and critical orthopedic surgical procedures. This analysis is centered on evaluating the precision and proficiency of K-wire placement in an elbow surgical phantom, designed with 3D modeling software starting from a virtual 3D anatomical reference. By visually superimposing 3D reconstructions of internal structures and the target K-wire positioning on the physical model, it is expected not only to improve the learning curve but also to establish a foundation for potential real-time surgical guidance in challenging clinical scenarios. Performance is measured as the difference between the actual K-wire placement and the target position; the quantitative measurements are then used to compare the risk of iatrogenic injury to nerves and vascular structures of MR-guided vs. non-MR-guided simulated interventions.
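As a concrete illustration of the placement-error measurement described in the Mixed Reality item above, the following is a minimal sketch that compares an inserted K-wire axis against its target axis. Representing each wire by an entry point plus a direction vector is an assumption made for illustration; this is not the paper's actual evaluation code.

```python
# Illustrative sketch (not the paper's code): compare an inserted K-wire
# against the planned target, each modeled as an entry point plus a
# direction vector in the phantom's coordinate frame.
import numpy as np

def kwire_placement_error(entry_actual, dir_actual, entry_target, dir_target):
    """Return (entry distance in mm, angular deviation in degrees)."""
    entry_actual, entry_target = np.asarray(entry_actual), np.asarray(entry_target)
    d_a = np.asarray(dir_actual) / np.linalg.norm(dir_actual)
    d_t = np.asarray(dir_target) / np.linalg.norm(dir_target)

    entry_error = np.linalg.norm(entry_actual - entry_target)        # mm
    angle_error = np.degrees(np.arccos(np.clip(np.dot(d_a, d_t), -1.0, 1.0)))
    return entry_error, angle_error

# Example usage with made-up coordinates (millimeters):
err_mm, err_deg = kwire_placement_error(
    entry_actual=[10.2, 4.1, 7.9], dir_actual=[0.1, 0.0, 1.0],
    entry_target=[10.0, 4.0, 8.0], dir_target=[0.0, 0.0, 1.0])
print(f"entry offset: {err_mm:.2f} mm, axis deviation: {err_deg:.2f} deg")
```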
Item: SlowDeepFood: a Food Computing Framework for Regional Gastronomy (The Eurographics Association, 2021)
Authors: Gilal, Nauman Ullah; Al-Thelaya, Khaled; Schneider, Jens; She, James; Agus, Marco
Editors: Frosini, Patrizio; Giorgi, Daniela; Melzi, Simone; Rodolà, Emanuele
Abstract: Food computing recently emerged as a stand-alone research field, in which artificial intelligence, deep learning, and data science methodologies are applied to the various stages of food production pipelines. Food computing may help end-users in maintaining healthy and nutritious diets by alerting them to high-caloric dishes and/or dishes containing allergens. A backbone for such applications, and a major challenge, is the automated recognition of food by means of computer vision. It is therefore no surprise that researchers have compiled various food data sets and paired them with well-performing deep learning architectures to perform said automatic classification. However, local cuisines are tied to specific geographic origins and are woefully underrepresented in most existing data sets. This leads to a clear gap when it comes to food computing on regional and traditional dishes. While one might argue that standardized data sets of world cuisine cover the majority of applications, such a stance would neglect systematic biases in data collection. It would also be at odds with recent initiatives such as SlowFood, seeking to support local food traditions and to preserve local contributions to the global variation of food items. To help preserve such local influences, we thus present a full end-to-end food computing framework that is able to: (i) semi-automatically create custom image data sets that represent traditional dishes; (ii) train custom classification models based on the EfficientNet family using transfer learning; (iii) deploy the resulting models in mobile applications for real-time inference on food images acquired through smartphone cameras. We not only assess the performance of the proposed deep learning architecture on standard food data sets (e.g., our model achieves 91.91% accuracy on ETH's Food-101), but also demonstrate the performance of our models on our own custom data sets comprising local cuisine, such as the Pizza-Styles data set and GCC-30. The former comprises 14 categories of pizza styles, whereas the latter contains 30 Middle Eastern dishes from the Gulf Cooperation Council members.

Item: SPIDER: SPherical Indoor DEpth Renderer (The Eurographics Association, 2022)
Authors: Tukur, Muhammad; Pintore, Giovanni; Gobbetti, Enrico; Schneider, Jens; Agus, Marco
Editors: Cabiddu, Daniela; Schneider, Teseo; Allegra, Dario; Catalano, Chiara Eva; Cherchi, Gianmarco; Scateni, Riccardo
Abstract: Today's Extended Reality (XR) applications that call for specific Diminished Reality (DR) strategies to hide specific classes of objects are increasingly using 360° cameras, which can capture entire areas in a single picture. In this work, we present an interactive image editing and rendering system named SPIDER, which takes a spherical 360° indoor scene as input. The system incorporates the output of deep learning models that estimate segmentation and depth images of full and empty rooms, allowing users to perform interactive exploration and basic editing operations on the reconstructed indoor scene, namely: i) rendering of the scene in various modalities (point cloud, polygonal, wireframe); ii) refurnishing (transferring portions of rooms); iii) deferred shading through the use of precomputed normal maps. These kinds of scene editing and manipulation can be used to assess the inference of deep learning models and enable several Extended Reality (XR) applications in areas such as furniture retail, interior design, and real estate. Moreover, it can also be useful in data augmentation, art, design, and painting.

Item: STAG 2019: Frontmatter (Eurographics Association, 2019)
Authors: Agus, Marco; Corsini, Massimiliano; Pintus, Ruggero
Editors: Agus, Marco; Corsini, Massimiliano; Pintus, Ruggero
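As an illustration of the transfer-learning step described in the SlowDeepFood item above, here is a minimal sketch that adapts a pretrained EfficientNet to a custom food data set. It uses torchvision rather than whatever training code the authors actually used, and the class count and hyperparameters are placeholder assumptions.

```python
# Illustrative transfer-learning sketch (not the authors' code): fine-tune a
# pretrained EfficientNet-B0 head on a custom food data set such as a
# regional-cuisine collection. Class count and hyperparameters are placeholders.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 14  # e.g., 14 pizza styles in a custom data set

weights = models.EfficientNet_B0_Weights.IMAGENET1K_V1
model = models.efficientnet_b0(weights=weights)

# Freeze the convolutional backbone; only the new classifier head is trained.
for param in model.features.parameters():
    param.requires_grad = False

# Replace the final linear layer with one sized for the custom classes.
in_features = model.classifier[1].in_features
model.classifier[1] = nn.Linear(in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Training step for one batch of preprocessed images and integer labels.
def train_step(images, labels):
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```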
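Similarly, the point-cloud rendering modality mentioned in the SPIDER item relies on lifting an equirectangular depth image back into 3D. The sketch below shows that reprojection under simple assumptions (camera at the origin, depth stored as metric distance along each ray), which may differ from the system's actual conventions.

```python
# Illustrative sketch for spherical depth reprojection (assumptions: camera at
# the origin, equirectangular layout, depth = metric distance along each ray).
import numpy as np

def spherical_depth_to_point_cloud(depth):
    """Convert an (H, W) equirectangular depth map to an (H*W, 3) point cloud."""
    h, w = depth.shape
    # Longitude (theta) spans [-pi, pi); latitude (phi) spans [pi/2, -pi/2].
    theta = (np.arange(w) + 0.5) / w * 2.0 * np.pi - np.pi
    phi = np.pi / 2.0 - (np.arange(h) + 0.5) / h * np.pi
    theta, phi = np.meshgrid(theta, phi)

    # Unit ray directions in a y-up coordinate frame.
    dirs = np.stack([np.cos(phi) * np.sin(theta),
                     np.sin(phi),
                     np.cos(phi) * np.cos(theta)], axis=-1)
    points = dirs * depth[..., None]
    return points.reshape(-1, 3)

# Example usage with a synthetic constant-depth panorama (2 m everywhere).
cloud = spherical_depth_to_point_cloud(np.full((256, 512), 2.0))
print(cloud.shape)  # (131072, 3)
```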