Search Results

Now showing 1 - 10 of 14
  • Item
    A Gaze Detection System for Neuropsychiatric Disorders Remote Diagnosis Support
    (The Eurographics Association, 2023) Cangelosi, Antonio; Antola, Gabriele; Iacono, Alberto Lo; Santamaria, Alfonso; Clerico, Marinella; Al-Thani, Dena; Agus, Marco; Calì, Corrado; Banterle, Francesco; Caggianese, Giuseppe; Capece, Nicola; Erra, Ugo; Lupinetti, Katia; Manfredi, Gilda
    Accurate and early diagnosis of neuropsychiatric disorders, such as Autism Spectrum Disorder (ASD), is a significant challenge in clinical practice. This study explores the use of real-time gaze tracking as a tool for unbiased, quantitative analysis of eye gaze. The results could support the diagnosis of such disorders and potentially serve as a tool in the field of rehabilitation. The proposed setup consists of an RGB-D camera embedded in latest-generation smartphones and a set of processing components for the analysis of recorded data related to patient interactivity. The proposed system is easy to use, requires little prior knowledge or expertise, and achieves a high level of accuracy; it can therefore be used remotely (telemedicine) to simplify diagnosis and rehabilitation processes. We present initial findings showing that real-time gaze tracking can be a valuable tool for clinicians: a non-invasive technique that provides unbiased quantitative data to aid early detection, monitoring, and treatment evaluation. These findings have significant implications for the advancement of ASD research, and the proposed approach has the potential to enhance diagnostic accuracy and improve patient outcomes.
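    The abstract does not detail the processing components; as a minimal sketch of the kind of quantitative, unbiased gaze metric such a pipeline might compute, the Python snippet below derives dwell time on a region of interest from time-stamped gaze samples. Function and variable names are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def dwell_time(gaze_xy, timestamps, roi):
        """Total time (s) the gaze spends inside a rectangular region of interest.

        gaze_xy    -- (N, 2) gaze points in screen coordinates
        timestamps -- (N,) sample times in seconds
        roi        -- (xmin, ymin, xmax, ymax) rectangle
        """
        xmin, ymin, xmax, ymax = roi
        inside = ((gaze_xy[:, 0] >= xmin) & (gaze_xy[:, 0] <= xmax) &
                  (gaze_xy[:, 1] >= ymin) & (gaze_xy[:, 1] <= ymax))
        # Attribute each inter-sample interval to the ROI if its starting sample is inside.
        dt = np.diff(timestamps)
        return float(np.sum(dt[inside[:-1]]))

    # Example: 60 Hz samples drifting left to right across a 100-unit-wide screen.
    t = np.arange(0, 1.0, 1 / 60)
    g = np.column_stack([np.linspace(0, 100, t.size), np.full(t.size, 50.0)])
    print(dwell_time(g, t, (0, 40, 50, 60)))  # about 0.5 s spent in the left half
    ```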
  • Item
    Practical Line Rasterization for Multi-resolution Textures
    (The Eurographics Association, 2014) Taibo, Javier; Jaspe, Alberto; Seoane, Antonio; Agus, Marco; Hernández, Luis; Giachetti, Andrea
    Draping 2D vector information over a 3D terrain elevation model is usually performed by real-time rendering to texture. For linear feature representation, the texturing approach raises several specific problems, especially when using multi-resolution textures; these concern visual quality, aliasing artifacts, and rendering performance. In this paper, we address the problems of 2D line rasterization on a multi-resolution texturing engine from a pragmatic point of view; several alternative solutions are presented, compared, and evaluated. For each solution we have analyzed the visual quality, the impact on rendering performance, and the memory consumption. The study is based on an OpenGL implementation of a clipmap-based multi-resolution texturing system and is oriented towards the use of inexpensive consumer graphics hardware.
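    The abstract does not enumerate the alternative solutions; as a hedged sketch of the baseline operation they share (rasterizing a 2D line into one level of a multi-resolution texture), the Python code below maps a world-space segment to texel coordinates for a clipmap level and draws it with Bresenham's algorithm. The level-size convention and all names are our own illustration, not the paper's implementation.

    ```python
    import numpy as np

    def rasterize_line_into_level(p0, p1, level, tile_size=256, world_extent=1024.0):
        """Rasterize a world-space segment into one clipmap level (a 2D grid).

        Each coarser level halves texel density: level 0 covers world_extent
        units with tile_size texels, level 1 covers twice that, and so on.
        """
        tex = np.zeros((tile_size, tile_size), dtype=np.uint8)
        units_per_texel = (world_extent * 2 ** level) / tile_size
        # World -> texel coordinates for this level (origin at the level's corner).
        x0, y0 = (int(c / units_per_texel) for c in p0)
        x1, y1 = (int(c / units_per_texel) for c in p1)
        # Integer Bresenham walk from (x0, y0) to (x1, y1).
        dx, dy = abs(x1 - x0), -abs(y1 - y0)
        sx, sy = (1 if x1 > x0 else -1), (1 if y1 > y0 else -1)
        err = dx + dy
        while True:
            if 0 <= y0 < tile_size and 0 <= x0 < tile_size:
                tex[y0, x0] = 255
            if (x0, y0) == (x1, y1):
                break
            e2 = 2 * err
            if e2 >= dy:
                err += dy
                x0 += sx
            if e2 <= dx:
                err += dx
                y0 += sy
        return tex

    level0 = rasterize_line_into_level((10.0, 10.0), (900.0, 500.0), level=0)
    print(level0.sum() // 255, "texels covered at level 0")
    ```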
  • Item
    SPIDER: SPherical Indoor DEpth Renderer
    (The Eurographics Association, 2022) Tukur, Muhammad; Pintore, Giovanni; Gobbetti, Enrico; Schneider, Jens; Agus, Marco; Cabiddu, Daniela; Schneider, Teseo; Allegra, Dario; Catalano, Chiara Eva; Cherchi, Gianmarco; Scateni, Riccardo
    Today's Extended Reality (XR) applications, which call for specific Diminished Reality (DR) strategies to hide particular classes of objects, increasingly use 360° cameras that can capture entire areas in a single picture. In this work, we present an interactive image editing and rendering system named SPIDER, which takes a spherical 360° indoor scene as input. The system incorporates the output of deep learning models that abstract segmentation and depth images of full and empty rooms, allowing users to perform interactive exploration and basic editing operations on the reconstructed indoor scene, namely: i) rendering of the scene in various modalities (point cloud, polygonal, wireframe); ii) refurnishing (transferring portions of rooms); iii) deferred shading through the use of precomputed normal maps. These kinds of scene editing and manipulation can be used to assess the inference of deep learning models and enable several XR applications in areas such as furniture retail, interior design, and real estate. Moreover, the system can also be useful in data augmentation, art, design, and painting.
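    As an illustration of the point-cloud modality such a system builds on (not SPIDER's actual code), the sketch below unprojects an equirectangular depth image into a 3D point cloud using the standard spherical parameterization; variable names are ours.

    ```python
    import numpy as np

    def equirect_depth_to_points(depth):
        """Unproject an equirectangular depth map (H x W, meters) to (H*W, 3) points."""
        h, w = depth.shape
        # Longitude spans [-pi, pi) across columns, latitude [pi/2, -pi/2] down rows.
        lon = (np.arange(w) + 0.5) / w * 2 * np.pi - np.pi
        lat = np.pi / 2 - (np.arange(h) + 0.5) / h * np.pi
        lon, lat = np.meshgrid(lon, lat)
        x = depth * np.cos(lat) * np.sin(lon)
        y = depth * np.sin(lat)
        z = depth * np.cos(lat) * np.cos(lon)
        return np.stack([x, y, z], axis=-1).reshape(-1, 3)

    # A constant 3 m depth map unprojects to points on a 3 m sphere.
    pts = equirect_depth_to_points(np.full((256, 512), 3.0))
    print(pts.shape, np.allclose(np.linalg.norm(pts, axis=1), 3.0))
    ```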
  • Item
    STAG 2019: Frontmatter
    (Eurographics Association, 2019) Agus, Marco; Corsini, Massimiliano; Pintus, Ruggero
  • Item
    Towards Advanced Volumetric Display of the Human Musculoskeletal System
    (The Eurographics Association, 2008) Agus, Marco; Giachetti, Andrea; Gobbetti, Enrico; Guitián, José Antonio Iglesias; Marton, Fabio; Scarano, Vittorio; De Chiara, Rosario; Erra, Ugo
    We report on our research results on effective volume visualization techniques for medical and anatomical data. Our volume rendering approach employs GPU-accelerated out-of-core direct rendering algorithms to fully support high-resolution 16-bit raw medical datasets as well as segmentation data. Images can be presented on a special light field display based on projection technology. To moving viewers, human anatomical data appear to float in the light field display space and can be interactively manipulated.
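    The paper's renderer is a GPU-accelerated out-of-core system; purely as a hedged CPU-side sketch of the underlying direct volume rendering algorithm (front-to-back emission-absorption compositing along a ray), with a trivial transfer function of our choosing:

    ```python
    import numpy as np

    def march_ray(volume, origin, direction, step=0.5, n_steps=256):
        """Front-to-back emission-absorption compositing along one ray.

        volume is a 3D scalar array; the sampled scalar is mapped directly
        to gray emission and opacity (a deliberately trivial transfer function).
        """
        color, alpha = 0.0, 0.0
        pos = np.asarray(origin, dtype=float)
        d = np.asarray(direction, dtype=float)
        d /= np.linalg.norm(d)
        for _ in range(n_steps):
            idx = tuple(pos.astype(int))                # nearest-neighbor sample
            if all(0 <= i < s for i, s in zip(idx, volume.shape)):
                s = float(volume[idx])
                a = min(1.0, s * step)                  # opacity for this step
                color += (1.0 - alpha) * a * s          # accumulate emission
                alpha += (1.0 - alpha) * a
                if alpha > 0.99:                        # early ray termination
                    break
            pos += d * step
        return color

    vol = np.zeros((64, 64, 64))
    vol[24:40, 24:40, 24:40] = 0.1                      # a dense cube in the middle
    print(march_ray(vol, origin=(0, 32, 32), direction=(1, 0, 0)))
    ```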
  • Item
    Visual Analysis of Glycogen Derived Lactate Absorption in Dense and Sparse Surface Reconstructions of Rodent Brain Structures
    (The Eurographics Association, 2017) Calì, Corrado; Agus, Marco; Gagnon, Nicholas; Hadwiger, Markus; Magistretti, Pierre J.; Giachetti, Andrea; Pingi, Paolo; Stanco, Filippo
    Astrocytes are the most abundant type of glial cell in the central nervous system; their involvement in brain functioning, from the synaptic to the network level, is to date a matter of intense research. A well-established function of astroglial cells, among others, is the metabolic support of neurons. Recently, it has been shown that during tasks like learning and long-term memory formation, synapses sustain their metabolic needs using lactate, a compound that astrocytes can synthesize from glycogen, a glucose-storing molecule, rather than from glucose itself. This role of astrocytes as an energy reservoir for neurons challenges the classic paradigms of neuro-energetic research. Understanding their morphology at nano-scale resolution is therefore a fundamental research challenge with enormous implications for many branches of neuroscience, such as the study of neurodegenerative and cognitive disorders. Here, we present an illustrative visualization technique customized for analyzing the interaction of astrocytic glycogen with surrounding neurites, in order to formulate hypotheses on the energy absorption mechanisms. The method integrates a high-resolution surface reconstruction of neurites with the energy sources in the form of glycogen granules, and computes an absorption map according to a radiance transfer mechanism. The technique is built on top of a framework for processing and rendering triangulated surface models, and it is used for real-time 3D exploration and inspection of the neural structures paired with the energy sources. The resulting visual representation provides an immediate and comprehensible illustration of the areas in which the probability of lactate shuttling is higher. This method has further been employed to test neuro-energetic hypotheses about the utilization of glycogen during synaptic development.
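    The abstract does not give the radiance-transfer formulation; a hedged sketch of the general idea (accumulating, at each surface vertex, the distance-attenuated influence of nearby glycogen granules) might look like the following, with an inverse-square falloff chosen by us purely for illustration:

    ```python
    import numpy as np

    def absorption_map(vertices, granules, eps=1e-6):
        """Per-vertex absorption as the sum of granule contributions.

        vertices -- (V, 3) mesh vertex positions
        granules -- (G, 3) glycogen granule positions
        """
        # Pairwise squared distances between vertices and granules, shape (V, G).
        d2 = np.sum((vertices[:, None, :] - granules[None, :, :]) ** 2, axis=-1)
        contrib = 1.0 / (d2 + eps)       # inverse-square, distance-attenuated influence
        a = contrib.sum(axis=1)
        return a / a.max()               # normalize to [0, 1] for color mapping

    verts = np.random.rand(1000, 3)      # stand-in for a neurite surface mesh
    grans = np.random.rand(20, 3)        # stand-in for glycogen granule centers
    print(absorption_map(verts, grans).shape)  # (1000,) per-vertex values
    ```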
  • Item
    SlowDeepFood: a Food Computing Framework for Regional Gastronomy
    (The Eurographics Association, 2021) Gilal, Nauman Ullah; Al-Thelaya, Khaled; Schneider, Jens; She, James; Agus, Marco; Frosini, Patrizio; Giorgi, Daniela; Melzi, Simone; Rodolà, Emanuele
    Food computing recently emerged as a stand-alone research field in which artificial intelligence, deep learning, and data science methodologies are applied to the various stages of food production pipelines. Food computing may help end-users maintain healthy and nutritious diets by alerting them to high-caloric dishes and/or dishes containing allergens. A backbone for such applications, and a major challenge, is the automated recognition of food by means of computer vision. It is therefore no surprise that researchers have compiled various food data sets and paired them with well-performing deep learning architectures to perform said automatic classification. However, local cuisines are tied to specific geographic origins and are woefully underrepresented in most existing data sets. This leads to a clear gap when it comes to food computing on regional and traditional dishes. While one might argue that standardized data sets of world cuisine cover the majority of applications, such a stance would neglect systematic biases in data collection. It would also be at odds with recent initiatives such as SlowFood, which seeks to support local food traditions and to preserve local contributions to the global variation of food items. To help preserve such local influences, we thus present a full end-to-end food computing framework that is able to: (i) create custom image data sets semi-automatically that represent traditional dishes; (ii) train custom classification models based on the EfficientNet family using transfer learning; (iii) deploy the resulting models in mobile applications for real-time inference on food images acquired through smartphone cameras. We not only assess the performance of the proposed deep learning architecture on standard food data sets (e.g., our model achieves 91.91% accuracy on ETH's Food-101), but also demonstrate the performance of our models on our own custom data sets comprising local cuisine, namely the Pizza-Styles data set and GCC-30. The former comprises 14 categories of pizza styles, whereas the latter contains 30 Middle Eastern dishes from the Gulf Cooperation Council member states.
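    The paper's exact training setup is not reproduced here; as a hedged sketch of step (ii), transfer learning on an EfficientNet backbone with torchvision, where the class count and the frozen-backbone choice are placeholders rather than the authors' configuration:

    ```python
    import torch
    import torch.nn as nn
    from torchvision import models

    def build_food_classifier(num_classes=14):
        """EfficientNet-B0 with a re-initialized head for a custom food data set."""
        model = models.efficientnet_b0(
            weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)
        for p in model.features.parameters():   # freeze the pretrained backbone
            p.requires_grad = False
        in_features = model.classifier[1].in_features
        model.classifier[1] = nn.Linear(in_features, num_classes)  # new head
        return model

    model = build_food_classifier(num_classes=14)  # e.g., 14 pizza styles
    logits = model(torch.randn(1, 3, 224, 224))    # one dummy RGB image
    print(logits.shape)                            # torch.Size([1, 14])
    ```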
  • Item
    Evaluating AI-based static stereoscopic rendering of indoor panoramic scenes
    (The Eurographics Association, 2024) Jashari, Sara; Tukur, Muhammad; Boraey, Yehia; Alzubaidi, Mahmood; Pintore, Giovanni; Gobbetti, Enrico; Villanueva, Alberto Jaspe; Schneider, Jens; Fetais, Noora; Agus, Marco; Caputo, Ariel; Garro, Valeria; Giachetti, Andrea; Castellani, Umberto; Dulecha, Tinsae Gebrechristos
    Panoramic imaging has recently become an extensively used technology for the representation and exploration of indoor environments. Panoramic cameras generate omnidirectional images that provide a comprehensive 360-degree view, making them a valuable tool for applications such as virtual tours in real estate, architecture, and cultural heritage. However, constructing truly immersive experiences from panoramic images presents challenges, particularly in generating panoramic stereo pairs that offer consistent depth cues and visual comfort across all viewing directions. Traditional stereo-imaging techniques do not directly apply to spherical panoramic images, requiring complex processing to avoid artifacts that can disrupt immersion. To address these challenges, various imaging and processing technologies have been developed, including multi-camera systems and computational methods that generate stereo images from a single panoramic input. Although effective, these solutions often involve complicated hardware and processing pipelines. Recently, deep learning approaches have emerged, enabling novel view generation from single panoramic images. While these methods show promise, they have not yet been thoroughly evaluated in practical scenarios. This paper presents a series of evaluation experiments aimed at assessing different technologies for creating static stereoscopic environments from omnidirectional imagery, with a focus on 3DOF immersive exploration. A user study was conducted using a WebXR prototype and a Meta Quest 3 headset to quantitatively and qualitatively compare traditional image composition techniques with AI-based methods. Our results indicate that while traditional methods provide a satisfactory level of immersion, AI-based generation is nearing a quality level suitable for deployment in web-based environments.
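    As a hedged illustration of the simplest traditional composition technique in this space (not any specific method from the study), the sketch below forward-warps an equirectangular image into a second view using depth-derived horizontal disparity; the pixel baseline and the hole handling are deliberately naive.

    ```python
    import numpy as np

    def naive_stereo_pair(image, depth, baseline_px=8.0):
        """Synthesize a second view by shifting pixels with depth-derived disparity.

        Disparity (in pixels) falls off with depth, so near content shifts more
        than far content. Occlusion holes are simply left black.
        """
        h, w = depth.shape
        right = np.zeros_like(image)
        disparity = (baseline_px / np.maximum(depth, 0.1)).astype(int)   # (H, W)
        cols = np.arange(w)[None, :] - disparity                         # target columns
        rows = np.arange(h)[:, None].repeat(w, axis=1)
        valid = (cols >= 0) & (cols < w)
        right[rows[valid], cols[valid]] = image[valid]                   # forward warp
        return right

    img = np.random.rand(128, 256, 3)       # stand-in panorama
    dep = np.full((128, 256), 2.0)          # constant 2 m depth
    print(naive_stereo_pair(img, dep).shape)
    ```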
  • Item
    DDD: Deep indoor panoramic Depth estimation with Density maps consistency
    (The Eurographics Association, 2024) Pintore, Giovanni; Agus, Marco; Signoroni, Alberto; Gobbetti, Enrico; Caputo, Ariel; Garro, Valeria; Giachetti, Andrea; Castellani, Umberto; Dulecha, Tinsae Gebrechristos
    We introduce a novel deep neural network for rapid and structurally consistent monocular 360° depth estimation in indoor environments. The network infers a depth map from a single gravity-aligned or gravity-rectified equirectangular image of the environment, ensuring that the predicted depth aligns with the typical depth distribution and features of cluttered interior spaces, which are usually enclosed by walls, ceilings, and floors. By leveraging the distinct characteristics of vertical and horizontal features in man-made indoor environments, we introduce a lean network architecture that employs gravity-aligned feature flattening and specialized vision transformers that exploit the input's omnidirectional nature, without segmentation into patches or positional encoding. To enhance the structural consistency of the predicted depth, we introduce a new loss function that evaluates the consistency of density maps obtained by projecting points derived from the inferred depth map onto horizontal and vertical planes. This lightweight architecture has very small computational demands, provides greater structural consistency than competing methods, and does not require the explicit imposition of strong structural priors.
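    The loss itself is not spelled out in the abstract; a hedged sketch of the stated idea (project points derived from a depth map onto a plane, histogram them into a density map, and compare predicted against ground-truth maps) could look like this, with the grid resolution and L1 comparison chosen by us:

    ```python
    import numpy as np

    def density_map(points, grid=64, extent=8.0):
        """Histogram 3D points onto a horizontal (x, z) grid, normalized to sum to 1."""
        h, _, _ = np.histogram2d(points[:, 0], points[:, 2], bins=grid,
                                 range=[[-extent, extent], [-extent, extent]])
        return h / max(h.sum(), 1.0)

    def density_consistency_loss(pred_points, gt_points):
        """L1 distance between horizontal density maps of predicted and GT points."""
        return np.abs(density_map(pred_points) - density_map(gt_points)).sum()

    # Points unprojected from a predicted depth map should yield a density map
    # close to the ground truth's; a small perturbation gives a small loss.
    gt = np.random.randn(10000, 3)
    pred = gt + 0.05 * np.random.randn(10000, 3)
    print(density_consistency_loss(pred, gt))
    ```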
  • Item
    Visual Enhancements for Improved Interactive Rendering on Light Field Displays
    (The Eurographics Association, 2011) Agus, Marco; Pintore, Giovanni; Marton, Fabio; Gobbetti, Enrico; Zorcolo, Antonio; Abate, Andrea F.; Nappi, Michele; Tortora, Genny
    Rendering complex scenes on a projector-based light field display requires 3D content adaptation in order to provide a comfortable viewing experience in all conditions. In this paper we report on our approach to improving the visual experience while coping with the limitations in the effective depth of field and angular field of view of the light field display. We present adaptation methods employing non-linear depth mapping and depth-of-field simulation, which leave large parts of the scene unmodified while modifying the other parts in a non-intrusive way. The methods are integrated in an interactive visualization system for the inspection of massive models on a large-scale 35-Mpixel light field display. Preliminary results of a subjective evaluation demonstrate that our rendering adaptation techniques improve visual comfort without affecting overall depth perception.
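    The exact mapping is not given in the abstract; as an illustrative sketch of non-linear depth remapping that leaves a central comfort zone unmodified and softly compresses depths outside it into the display's usable range (the tanh compression is our choice, not the paper's):

    ```python
    import numpy as np

    def remap_depth(z, comfort=0.5, display_range=1.0):
        """Leave depths within +/- comfort untouched; softly compress the rest.

        z is signed depth relative to the display plane (meters). Depths beyond
        the comfort zone are squashed with tanh so everything fits within
        +/- display_range, matching the display's effective depth of field.
        """
        out = np.array(z, dtype=float)
        outside = np.abs(out) > comfort
        excess = np.abs(out[outside]) - comfort
        headroom = display_range - comfort
        out[outside] = np.sign(out[outside]) * (
            comfort + headroom * np.tanh(excess / headroom))
        return out

    z = np.array([0.1, 0.4, 0.8, 2.0, -3.0])
    print(remap_depth(z))  # within-comfort values unchanged, far values compressed
    ```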