Search Results

Now showing 1–10 of 12
  • Item
    SynthPS: a Benchmark for Evaluation of Photometric Stereo Algorithms for Cultural Heritage Applications
    (The Eurographics Association, 2020) Dulecha, Tinsae Gebrechristos; Pintus, Ruggero; Gobbetti, Enrico; Giachetti, Andrea; Spagnuolo, Michela and Melero, Francisco Javier
Photometric Stereo (PS) is a technique for estimating surface normals from a collection of images captured from a fixed viewpoint and with variable lighting. Over the years, several methods have been proposed for the task, trying to cope with different materials, lights, and camera calibration issues. An accurate evaluation and selection of the best PS methods for different materials and acquisition setups is a fundamental step for the accurate quantitative reconstruction of objects' shapes. In particular, it would boost quantitative reconstruction in the Cultural Heritage domain, where a large number of Multi-Light Image Collections are captured with light domes or handheld Reflectance Transformation Imaging protocols. However, the lack of benchmarks specifically designed for this goal makes it difficult to compare the available methods and choose the most suitable technique for practical applications. An ideal benchmark should enable the evaluation of the quality of the reconstructed normals on the kind of surfaces typically captured in real-world applications, possibly evaluating performance variability as a function of material properties, light distribution, and image quality. The evaluation should not depend on light and camera calibration issues. In this paper, we propose a benchmark of this kind, SynthPS, which includes synthetic, physically-based renderings of Cultural Heritage object models with different assigned materials. SynthPS allowed us to evaluate the performance of classical, robust, and learning-based Photometric Stereo approaches on different materials with different light distributions, also analyzing their robustness against errors typically arising in practical acquisition settings, including gamma correction and light calibration errors.
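The classical Lambertian baseline that such a benchmark evaluates reduces to a per-pixel least-squares solve for an albedo-scaled normal. A minimal sketch (not the paper's code; assumes calibrated directional lights and no shadows):

```python
import numpy as np

def lambertian_ps(intensities, light_dirs):
    """Estimate per-pixel surface normals and albedo by least squares.

    intensities: (k, n) array, k images of n pixels each.
    light_dirs:  (k, 3) array of unit light directions.
    """
    # Solve L @ g = I for g = albedo * normal at every pixel.
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)  # (3, n)
    albedo = np.linalg.norm(g, axis=0)
    normals = g / np.maximum(albedo, 1e-12)
    return normals, albedo
```

Robust and learning-based variants replace this solve with outlier rejection or a trained regressor, which is precisely the spectrum of methods the benchmark compares.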
  • Item
    SPIDER: SPherical Indoor DEpth Renderer
    (The Eurographics Association, 2022) Tukur, Muhammad; Pintore, Giovanni; Gobbetti, Enrico; Schneider, Jens; Agus, Marco; Cabiddu, Daniela; Schneider, Teseo; Allegra, Dario; Catalano, Chiara Eva; Cherchi, Gianmarco; Scateni, Riccardo
Today's Extended Reality (XR) applications that call for specific Diminished Reality (DR) strategies to hide specific classes of objects are increasingly using 360° cameras, which can capture entire areas in a single picture. In this work, we present an interactive image editing and rendering system named SPIDER, which takes a spherical 360° indoor scene as input. The system incorporates the output of deep learning models to abstract the segmentation and depth images of full and empty rooms, allowing users to perform interactive exploration and basic editing operations on the reconstructed indoor scene, namely: i) rendering of the scene in various modalities (point cloud, polygonal, wireframe); ii) refurnishing (transferring portions of rooms); iii) deferred shading through the usage of precomputed normal maps. These kinds of scene editing and manipulation can be used for assessing the inference of deep learning models and enable several Extended Reality (XR) applications in areas such as furniture retail, interior design, and real estate. Moreover, it can also be useful in data augmentation, art, design, and painting.
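The point-cloud rendering modality implies back-projecting the spherical depth image to 3D: each equirectangular pixel maps to a direction on the unit sphere, scaled by its depth. An illustrative reconstruction of that step (not SPIDER's actual code; assumes an equirectangular depth map):

```python
import numpy as np

def spherical_to_points(depth):
    """Back-project an equirectangular depth map (h, w) to 3D points.

    Each pixel center maps to a unit direction; multiplying by the
    per-pixel depth yields a point cloud of the indoor scene.
    """
    h, w = depth.shape
    # Longitude in [-pi, pi), latitude from +pi/2 (top) to -pi/2 (bottom).
    lon = (np.arange(w) + 0.5) / w * 2 * np.pi - np.pi
    lat = np.pi / 2 - (np.arange(h) + 0.5) / h * np.pi
    lon, lat = np.meshgrid(lon, lat)
    dirs = np.stack([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)
    return dirs * depth[..., None]
```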
  • Item
    A Novel Approach for Exploring Annotated Data With Interactive Lenses
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Bettio, Fabio; Ahsan, Moonisa; Marton, Fabio; Gobbetti, Enrico; Borgo, Rita and Marai, G. Elisabeta and Landesberger, Tatiana von
We introduce a novel approach for assisting users in exploring 2D data representations with an interactive lens. Focus-and-context exploration is supported by translating user actions to the joint adjustments in camera and lens parameters that ensure a good placement and sizing of the lens within the view. This general approach, implemented using standard device mappings, overcomes the limitations of current solutions, which force users to continuously switch from lens positioning and scaling to view panning and zooming. Navigation is further assisted by exploiting data annotations. In addition to traditional visual markups and information links, we associate with each annotation a lens configuration that highlights the region of interest. During interaction, an assisting controller determines the next best lens in the database based on the current view and lens parameters and the navigation history. Then, the controller interactively guides the user's lens towards the selected target and displays its annotation markup. As only one annotation markup is displayed at a time, clutter is reduced. Moreover, in addition to guidance, the navigation can also be automated to create a tour through the data. While our methods are generally applicable to 2D visualization, we have implemented them for the exploration of stratigraphic relightable models. The capabilities of our approach are demonstrated in cultural heritage use cases. A user study has been performed in order to validate our approach.
  • Item
    InShaDe: Invariant Shape Descriptors for Visual Analysis of Histology 2D Cellular and Nuclear Shapes
    (The Eurographics Association, 2020) Agus, Marco; Al-Thelaya, Khaled; Cali, Corrado; Boido, Marina M.; Yang, Yin; Pintore, Giovanni; Gobbetti, Enrico; Schneider, Jens; Kozlíková, Barbora and Krone, Michael and Smit, Noeska and Nieselt, Kay and Raidou, Renata Georgia
We present a shape processing framework for visual exploration of cellular nuclear envelopes extracted from histology images. The framework is based on a novel shape descriptor of closed contours relying on a geodesically uniform resampling of discrete curves to allow for discrete differential-geometry-based computation of unsigned curvature at vertices and edges. Our descriptor is, by design, invariant under translation, rotation and parameterization. Moreover, it additionally offers the option for uniform-scale-invariance. The optional scale-invariance is achieved by scaling features to z-scores, while invariance under parameterization shifts is achieved by using elliptic Fourier analysis (EFA) on the resulting curvature vectors. These invariant shape descriptors provide an embedding into a fixed-dimensional feature space that can be utilized for various applications: (i) as input features for deep and shallow learning techniques; (ii) as input for dimension reduction schemes providing a visual reference for clustering collections of shapes. The capabilities of the proposed framework are demonstrated in the context of visual analysis and unsupervised classification of histology images.
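The first two stages of such a descriptor — arc-length-uniform resampling of a closed contour, then unsigned discrete curvature (turning angle) at each vertex — can be sketched as follows. This is an illustrative reading of the abstract, not the InShaDe implementation:

```python
import numpy as np

def uniform_resample(contour, m):
    """Resample a closed 2D contour to m arc-length-uniform vertices."""
    closed = np.vstack([contour, contour[:1]])          # close the loop
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])         # cumulative arc length
    t = np.linspace(0.0, s[-1], m, endpoint=False)
    x = np.interp(t, s, closed[:, 0])
    y = np.interp(t, s, closed[:, 1])
    return np.stack([x, y], axis=1)

def turning_angles(points):
    """Unsigned discrete curvature (turning angle) at each vertex."""
    prev = np.roll(points, 1, axis=0)
    nxt = np.roll(points, -1, axis=0)
    a1 = np.arctan2((points - prev)[:, 1], (points - prev)[:, 0])
    a2 = np.arctan2((nxt - points)[:, 1], (nxt - points)[:, 0])
    return np.abs((a2 - a1 + np.pi) % (2 * np.pi) - np.pi)
```

z-scoring these curvature vectors and applying elliptic Fourier analysis would then supply the optional scale- and shift-invariance the abstract describes.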
  • Item
    Automatic Surface Segmentation for Seamless Fabrication Using 4-axis Milling Machines
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Nuvoli, Stefano; Tola, Alessandro; Muntoni, Alessandro; Pietroni, Nico; Gobbetti, Enrico; Scateni, Riccardo; Mitra, Niloy and Viola, Ivan
We introduce a novel geometry-processing pipeline to guide the fabrication of complex shapes from a single block of material using 4-axis CNC milling machines. This setup extends classical 3-axis CNC machining with an extra degree of freedom to rotate the object around a fixed axis. The first step of our pipeline identifies the rotation axis that maximizes the overall fabrication accuracy. Then we identify two height-field regions at the rotation axis's extremes used to secure the block on the rotation tool. We segment the remaining portion of the mesh into a set of height-fields whose principal directions are orthogonal to the rotation axis. The segmentation balances the approximation quality, the boundary smoothness, and the total number of patches. Additionally, the segmentation process takes into account the object's geometric features, as well as saliency information. The output is a set of meshes ready to be processed by off-the-shelf software for the 3-axis tool-path generation. We present several results to demonstrate the quality and efficiency of our approach on a range of inputs.
  • Item
    Effective Interactive Visualization of Neural Relightable Images in a Web-based Multi-layered Framework
    (The Eurographics Association, 2023) Righetto, Leonardo; Bettio, Fabio; Ponchio, Federico; Giachetti, Andrea; Gobbetti, Enrico; Bucciero, Alberto; Fanini, Bruno; Graf, Holger; Pescarin, Sofia; Rizvic, Selma
Relightable images created from Multi-Light Image Collections (MLICs) are one of the most commonly employed models for interactive object exploration in cultural heritage. In recent years, neural representations have been shown to produce higher-quality images, at similar storage costs, with respect to the more classic analytical models such as Polynomial Texture Maps (PTM) or Hemispherical Harmonics (HSH). However, their integration in practical interactive tools has so far been limited due to the higher evaluation cost, which makes it difficult to employ them for interactive inspection of large images, and to the integration difficulty caused by the need to incorporate deep-learning libraries in relightable renderers. In this paper, we illustrate how a state-of-the-art neural reflectance model can be directly evaluated, using common WebGL shader features, inside a multiplatform renderer. We then show how this solution can be embedded in a scalable framework capable of handling multi-layered relightable models in web settings. We finally show the performance and capabilities of the method on cultural heritage objects.
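The key observation behind evaluating such a model in a shader is that a small decoder network is just a few matrix products and activations, which WebGL can express directly. A Python sketch of the arithmetic a fragment shader would perform per pixel (the layer sizes and weights here are hypothetical, not the paper's model):

```python
import numpy as np

def relight_pixel(features, light_dir, W1, b1, W2, b2):
    """Evaluate a small decoder MLP for one pixel, as a fragment
    shader would: the pixel's latent code (sampled from feature
    textures) and the light direction pass through one hidden
    ReLU layer and a linear RGB output layer.
    """
    x = np.concatenate([features, light_dir])
    h = np.maximum(W1 @ x + b1, 0.0)   # hidden layer with ReLU
    rgb = W2 @ h + b2                  # linear output layer
    return np.clip(rgb, 0.0, 1.0)      # displayable color
```

In a WebGL implementation, the weights would be baked into shader uniforms or textures, so no deep-learning library is needed at display time.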
  • Item
    Guiding Lens-based Exploration using Annotation Graphs
    (The Eurographics Association, 2021) Ahsan, Moonisa; Marton, Fabio; Pintus, Ruggero; Gobbetti, Enrico; Frosini, Patrizio and Giorgi, Daniela and Melzi, Simone and Rodolà, Emanuele
    We introduce a novel approach for guiding users in the exploration of annotated 2D models using interactive visualization lenses. Information on the interesting areas of the model is encoded in an annotation graph generated at authoring time. Each graph node contains an annotation, in the form of a visual markup of the area of interest, as well as the optimal lens parameters that should be used to explore the annotated area and a scalar representing the annotation importance. Graph edges are used, instead, to represent preferred ordering relations in the presentation of annotations. A scalar associated to each edge determines the strength of this prescription. At run-time, the graph is exploited to assist users in their navigation by determining the next best annotation in the database and moving the lens towards it when the user releases interactive control. The selection is based on the current view and lens parameters, the graph content and structure, and the navigation history. This approach supports the seamless blending of an automatic tour of the data with interactive lens-based exploration. The approach is tested and discussed in the context of the exploration of multi-layer relightable models.
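A next-best-annotation selection of the kind described can be sketched as a score over unvisited graph nodes that mixes annotation importance, proximity to the current lens, and the strength of ordering edges from the last shown annotation. The weighting below is illustrative, not the paper's:

```python
import numpy as np

def next_best_annotation(lens_pos, nodes, edges, visited):
    """Pick the next annotation to guide the lens towards.

    nodes:   dict id -> (position (2,), importance scalar)
    edges:   dict (src, dst) -> strength of the preferred ordering
    visited: list of already-shown annotation ids, in order
    """
    last = visited[-1] if visited else None
    best_id, best_score = None, -np.inf
    for nid, (pos, importance) in nodes.items():
        if nid in visited:
            continue
        dist = np.linalg.norm(np.asarray(pos, float) - np.asarray(lens_pos, float))
        score = importance / (1.0 + dist)          # prefer close, important nodes
        if last is not None:
            score += edges.get((last, nid), 0.0)   # ordering-edge bonus
        if score > best_score:
            best_id, best_score = nid, score
    return best_id
```

Calling this repeatedly while moving the lens to each returned node yields the automatic tour; letting the user drag the lens between calls gives the blended interactive mode.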
  • Item
    HexBox: Interactive Box Modeling of Hexahedral Meshes
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Zoccheddu, Francesco; Gobbetti, Enrico; Livesu, Marco; Pietroni, Nico; Cherchi, Gianmarco; Memari, Pooran; Solomon, Justin
We introduce HexBox, an intuitive modeling method and interactive tool for creating and editing hexahedral meshes. HexBox brings the major and widely validated surface modeling paradigm of box modeling into the world of hex meshing. The main idea is to allow the user to box-model a volumetric mesh by primarily modifying its surface through a set of topological and geometric operations. We support, in particular, local and global subdivision, various instantiations of extrusion, removal, and cloning of elements, the creation of non-conformal or conformal grids, as well as shape modifications through vertex positioning, including manual editing, automatic smoothing, or, optionally, projection on an externally-provided target surface. At the core of the efficient implementation of the method is the coherent maintenance, at all steps, of two parallel data structures: a hexahedral mesh representing the topology and geometry of the currently modeled shape, and a directed acyclic graph that connects operation nodes to the affected mesh hexahedra. Operations are realized by exploiting recent advancements in grid-based meshing, such as mixing of 3-refinement, 2-refinement, and face-refinement, and using templated topological bridges to enforce on-the-fly mesh conformity across pairs of adjacent elements. A direct manipulation user interface lets users control all operations. The effectiveness of our tool, released as open source to the community, is demonstrated by modeling several complex shapes hard to realize with competing tools and techniques.
  • Item
    Exploiting Neighboring Pixels Similarity for Effective SV-BRDF Reconstruction from Sparse MLICs
    (The Eurographics Association, 2021) Pintus, Ruggero; Ahsan, Moonisa; Marton, Fabio; Gobbetti, Enrico; Hulusic, Vedad and Chalmers, Alan
    We present a practical solution to create a relightable model from Multi-light Image Collections (MLICs) acquired using standard acquisition pipelines. The approach targets the difficult but very common situation in which the optical behavior of a flat, but visually and geometrically rich object, such as a painting or a bas relief, is measured using a fixed camera taking few images with a different local illumination. By exploiting information from neighboring pixels through a carefully crafted weighting and regularization scheme, we are able to efficiently infer subtle per-pixel analytical Bidirectional Reflectance Distribution Functions (BRDFs) representations from few per-pixel samples. The method is qualitatively and quantitatively evaluated on both synthetic data and real paintings in the scope of image-based relighting applications.
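The core idea — pooling observations from similar neighboring pixels into each pixel's reflectance fit — can be illustrated with a diffuse-only weighted least squares, a deliberately simplified stand-in for the paper's full per-pixel BRDF model and crafted weighting scheme:

```python
import numpy as np

def fit_albedo_with_neighbors(obs, cosines, similarity):
    """Per-pixel diffuse albedo fit that pools neighbor observations.

    obs:        (n_pixels, k) observed intensities per pixel and light.
    cosines:    (n_pixels, k) n·l shading terms per pixel and light.
    similarity: (n_pixels, n_pixels) weights, 1.0 on the diagonal and
                smaller for less similar neighbors (illustrative stand-in
                for the paper's weighting and regularization scheme).

    Each pixel's albedo is the weighted least-squares slope of
    intensity vs. cosine over its own and its neighbors' samples,
    which stabilizes the fit when only few images are available.
    """
    n = obs.shape[0]
    albedo = np.empty(n)
    for p in range(n):
        w = similarity[p][:, None]               # per-pixel pooling weights
        num = (w * cosines * obs).sum()
        den = (w * cosines * cosines).sum()
        albedo[p] = num / den
    return albedo
```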
  • Item
    HistoContours: a Framework for Visual Annotation of Histopathology Whole Slide Images
    (The Eurographics Association, 2022) Al-Thelaya, Khaled; Joad, Faaiz; Gilal, Nauman Ullah; Mifsud, William; Pintore, Giovanni; Gobbetti, Enrico; Agus, Marco; Schneider, Jens; Renata G. Raidou; Björn Sommer; Torsten W. Kuhlen; Michael Krone; Thomas Schultz; Hsiang-Yun Wu
We present an end-to-end framework for histopathological analysis of whole slide images (WSIs). Our framework uses deep learning-based localization & classification of cell nuclei followed by spatial data aggregation to propagate classes of sparsely distributed nuclei across the entire slide. We use YOLO (''You Only Look Once'') for localization instead of more costly segmentation approaches and show that using HistAuGAN boosts its performance. YOLO finds bounding boxes around nuclei at good accuracy, but the classification accuracy can be improved by other methods. To this end, we extract patches around nuclei from the WSI and consider models from the SqueezeNet, ResNet, and EfficientNet families for classification. Where we do not achieve a clear separation between the highest and second-highest softmax activation of the classifier, we use YOLO's output as a secondary vote. The result is a sparse annotation of the WSI, which we turn dense by using kernel density estimation, yielding a full vector of per-pixel probabilities for each class of nucleus we consider. This allows us to visualize our results using both color-coding and isocontouring, reducing visual clutter. Our novel nuclei-to-tissue coupling allows histopathologists to work at both the nucleus and the tissue level, a feature appreciated by domain experts in a qualitative user study.
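The densification step — turning sparse per-nucleus class labels into dense per-pixel class probabilities via kernel density estimation — can be sketched as follows. This is an illustrative reconstruction; the Gaussian kernel and bandwidth handling are assumptions, not the paper's exact choices:

```python
import numpy as np

def densify_classes(points, labels, n_classes, shape, bandwidth):
    """Gaussian KDE over sparse nucleus labels, per class.

    points: (n, 2) nucleus centers as (row, col); labels: (n,) class ids.
    Returns an (h, w, n_classes) array of per-pixel class probabilities
    that sum to 1 wherever any density is present; isocontours of these
    fields give the kind of visualization the framework produces.
    """
    h, w = shape
    rr, cc = np.mgrid[0:h, 0:w]
    density = np.zeros((h, w, n_classes))
    for (r, c), lab in zip(points, labels):
        d2 = (rr - r) ** 2 + (cc - c) ** 2
        density[..., lab] += np.exp(-d2 / (2.0 * bandwidth ** 2))
    total = density.sum(axis=-1, keepdims=True)
    return density / np.maximum(total, 1e-12)
```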