Search Results
Now showing 1 - 10 of 59
Item: The Use of Photogrammetry in Historic Preservation Curriculum: A Comparative Case Study (The Eurographics Association, 2024)
Kepczynska-Walczak, Anetta; Walczak, Bartosz M.; Zarzycki, Andrzej; Sousa Santos, Beatriz; Anderson, Eike
Computer graphics techniques have emerged as a key player in digital heritage preservation and its dissemination. Photogrammetry allows for high-fidelity captures and virtual reconstructions of the built environment that can be further ported into virtual reality (VR) and augmented reality (AR) experiences. This paper provides a comparative analysis of methods for documenting historic details and buildings in heritage preservation, in the context of architectural education. Specifically, it compares two educational case studies, conducted ten years apart, that documented the same set of historic artifacts with the corresponding state-of-the-art digital technologies. The methodology is a qualitative comparative analysis of two surveying projects that used distinct emerging digital technologies while sharing the same study subjects and a similar tool-driven curricular framework. The research also incorporates a student survey, offering perspectives on teaching strategies and outcomes within this dynamic educational context. The outcomes demonstrate that the technological (tool-driven) shift affects how students interact with the investigated artifacts and shifts the balance between the interpretative and analytical skills needed to delineate the work. It also changes what are considered primary and secondary knowledge sources.

Item: Real-time Seamless Object Space Shading (The Eurographics Association, 2024)
Li, Tianyu; Guo, Xiaoxin; Hu, Ruizhen; Charalambous, Panayiotis
Object space shading remains a challenging problem in real-time rendering due to runtime overhead and object parameterization limitations. While the recently developed algorithm by Baker et al. [BJ22] enables high-performance real-time object space shading, it still suffers from seam artifacts. In this paper, we introduce an innovative object space shading system that leverages a virtualized per-halfedge texturing scheme to avoid redundant shading and preclude texture seam artifacts. Moreover, we implement ReSTIR GI on our system (see Figure 1), removing the need to temporally reproject shading samples and improving convergence in disoccluded areas. Our system yields superior results in terms of both efficiency and visual fidelity.
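The ReSTIR GI integration mentioned in the abstract above builds on resampled importance sampling. For orientation only, the sketch below shows the generic streaming reservoir update at the core of ReSTIR-family methods; it is not the authors' GPU implementation, and the names and structure are illustrative assumptions.

```python
import random

class Reservoir:
    """Streaming weighted-reservoir update used by ReSTIR-style resampled
    importance sampling: keep one selected sample, a running weight sum,
    and the number of candidates seen. Generic sketch, not the paper's code."""
    def __init__(self):
        self.y = None        # currently selected sample
        self.w_sum = 0.0     # sum of resampling weights seen so far
        self.M = 0           # number of candidate samples streamed

    def update(self, sample, weight):
        self.w_sum += weight
        self.M += 1
        # replace the kept sample with probability weight / w_sum
        if weight > 0.0 and random.random() < weight / self.w_sum:
            self.y = sample

def contribution_weight(reservoir, target_pdf):
    """W = w_sum / (M * p_hat(y)), with p_hat the (unnormalized) target density."""
    if reservoir.y is None or reservoir.M == 0:
        return 0.0
    p_hat = target_pdf(reservoir.y)
    return reservoir.w_sum / (reservoir.M * p_hat) if p_hat > 0.0 else 0.0
```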
Item: DeepIron: Predicting Unwarped Garment Texture from a Single Image (The Eurographics Association, 2024)
Kwon, Hyun-Song; Lee, Sung-Hee; Hu, Ruizhen; Charalambous, Panayiotis
Realistic reconstruction of 3D clothing from an image has wide applications, such as avatar creation and virtual try-on. This paper presents a novel framework that reconstructs the texture map for 3D garments from a single posed garment image. Since 3D garments are effectively modeled by stitching 2D garment sewing patterns, our specific goal is to generate a texture image for the sewing patterns. A key component of our framework, the Texture Unwarper, infers the original texture image from the input garment image, which exhibits warping and occlusion of the garment due to the user's body shape and pose. This is achieved by translating between the input and output images through a mapping between their latent spaces. By inferring the unwarped original texture of the input garment, our method helps reconstruct 3D garment models whose high-quality textures deform realistically for new poses. We validate the effectiveness of our approach through comparisons with other methods and ablation studies.

Item: Predictive Modeling of Material Appearance: From the Drawing Board to Interdisciplinary Applications (The Eurographics Association, 2024)
Baranoski, Gladimir V. G.; Mania, Katerina; Artusi, Alessandro
This tutorial addresses one of the fundamental and timely topics of computer graphics research, namely the predictive modeling of material appearance. Although this topic is deeply rooted in traditional areas like rendering and natural phenomena simulation, the tutorial is not limited to content connected to these areas. It also looks closely at the scientific methodology employed in the development of predictive models of light and matter interactions. Given the widespread use of this methodology to find modeling solutions for problems within and outside computer graphics, its discussion from a "behind the scenes" perspective aims to underscore practical and far-reaching aspects of interdisciplinary research that are often overlooked in related publications. More specifically, the tutorial unveils constraints and pitfalls found in each of the key stages of the model development process, namely data collection, design and evaluation, and brings forward alternatives to tackle them effectively. Furthermore, besides being a central component of realistic image synthesis frameworks, predictive material appearance models have a scope of applications that extends far beyond the generation of believable images. For instance, they can be employed to accelerate the hypothesis generation and validation cycles of research across a wide range of fields, from biology and medicine to photonics and remote sensing. These models can also be used to generate comprehensive in silico (computational) datasets to support the translation of knowledge advances in those fields to real-world applications (e.g., the noninvasive screening of medical conditions and the remote detection of environmental hazards). In fact, a number of them are already being used in the physical and life sciences, notably to support investigations seeking to strengthen the current understanding of material appearance changes prompted by mechanisms that cannot be fully studied using standard "wet" experimental procedures. Such interdisciplinary research initiatives are also discussed in this tutorial through selected case studies involving the use of predictive material appearance models to elucidate challenging scientific questions.

Item: Dense 3D Gaussian Splatting Initialization for Sparse Image Data (The Eurographics Association, 2024)
Seibt, Simon; Chang, Thomas Vincent Siu-Lung; von Rymon Lipinski, Bartosz; Latoschik, Marc Erich; Liu, Lingjie; Averkiou, Melinos
This paper presents advancements in novel-view synthesis with 3D Gaussian Splatting (3DGS) using a dense and accurate SfM point cloud initialization approach. We address the challenge of achieving photorealistic renderings from sparse image data, where basic 3DGS training may converge suboptimally and produce visual artifacts. The proposed method enhances the precision and density of the initially reconstructed point clouds by refining 3D positions and extrapolating additional points, even for difficult image regions, e.g. those with repeating patterns or suboptimal visual coverage. Our contributions focus on improving "Dense Feature Matching for Structure-from-Motion" (DFM4SfM), based on a homographic decomposition of the image space, to support 3DGS training: First, a grid-based feature detection method is introduced for DFM4SfM to ensure a well-distributed 3D Gaussian initialization uniformly across all depth planes. Second, the SfM feature matching is complemented by a geometric plausibility check, priming the homography estimation and thereby improving the initial placement of 3D Gaussians. Experimental results on the NeRF-LLFF dataset demonstrate that this approach achieves superior qualitative and quantitative results, even with fewer views, and shows the potential for significantly accelerated 3DGS training with faster convergence.
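To illustrate the kind of grid-based feature detection the abstract above refers to, here is a minimal sketch that caps the number of keypoints kept per image cell so that matches, and hence the initial 3D Gaussians, are spread evenly over the image. It assumes a generic detector response map; `grid_limited_keypoints` and its parameters are illustrative and not part of the DFM4SfM implementation.

```python
import numpy as np

def grid_limited_keypoints(scores, grid=(8, 8), per_cell=32):
    """Keep only the strongest keypoints in each grid cell.
    scores: HxW response map from any corner/feature detector.
    Returns a list of (x, y) pixel coordinates, roughly uniform over the image."""
    H, W = scores.shape
    keypoints = []
    for gy in range(grid[0]):
        for gx in range(grid[1]):
            y0, y1 = gy * H // grid[0], (gy + 1) * H // grid[0]
            x0, x1 = gx * W // grid[1], (gx + 1) * W // grid[1]
            cell = scores[y0:y1, x0:x1]
            # indices of the strongest responses inside this cell
            flat = np.argsort(cell, axis=None)[::-1][:per_cell]
            ys, xs = np.unravel_index(flat, cell.shape)
            keypoints.extend(zip(xs + x0, ys + y0))
    return keypoints
```

Capping detections per cell is a common way to keep feature matches from clustering in a few highly textured regions, which is the distribution problem the abstract attributes to suboptimal 3DGS initialization.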
Item: Tackling Diverse Student Backgrounds and Goals while Teaching an Introductory Visual Computing Course at M.Sc. Level (The Eurographics Association, 2024)
Silva, Samuel; Sousa Santos, Beatriz; Anderson, Eike
Visual Computing entails a set of competences that are core for those pursuing Digital Game Development and has become a much sought-after competence for professionals in a wide variety of fields. In the particular case presented here, the course serves a diverse audience, from Multimedia and Design students without previous knowledge of the field and with limited programming competences, to students who hold a previous B.Sc. in Game Development and have already covered the basic concepts in an earlier course. Additionally, the course is offered as an elective for Computer Science M.Sc. students. This diverse set of background competences and goals motivated designing an approach to the course in which each student can build on previous knowledge and have a say in their personal learning path. This article shares the overall approach, presents and discusses the outcomes, and reflects on future evolutions.

Item: EUROGRAPHICS 2024: Tutorials Frontmatter (Eurographics Association, 2024)
Mania, Katerina; Artusi, Alessandro

Item: Fast Dynamic Facial Wrinkles (The Eurographics Association, 2024)
Weiss, Sebastian; Chandran, Prashanth; Zoss, Gaspard; Bradley, Derek; Hu, Ruizhen; Charalambous, Panayiotis
We present a new method to animate the dynamic motion of skin micro wrinkles under facial expression deformation. Since wrinkles form as a reservoir of skin for stretching, our model only deforms wrinkles that are perpendicular to the stress axis. Specifically, those wrinkles become wider and shallower when stretched, and deeper and narrower when compressed. In contrast to previous methods that attempted to modify the neutral wrinkle displacement map, our approach modifies the way wrinkles are constructed in the displacement map. To this end, we build upon a previous synthetic wrinkle generator that allows us to control the width and depth of individual wrinkles when they are generated on a per-frame basis. Furthermore, since constructing a displacement map for every frame of animation is costly, we present a fast approximation that uses pre-computed displacement maps of wrinkles binned by stretch direction, which can be blended interactively in a shader. We compare both our high-quality and fast methods with previous techniques for wrinkle animation and demonstrate that our work retains more realistic details.
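The fast approximation described above blends pre-computed displacement maps binned by stretch direction. The toy CPU sketch below shows one plausible way such bins could be weighted by a local stress axis; the falloff, function name, and parameters are assumptions for illustration and are not the paper's shader.

```python
import numpy as np

def blend_wrinkle_maps(disp_bins, bin_dirs, stretch_dir, stretch_amount):
    """disp_bins: list of HxW displacement maps, one per wrinkle-direction bin.
    bin_dirs: unit 2-vectors giving each bin's dominant wrinkle direction.
    stretch_dir: unit 2-vector of the local stress axis.
    stretch_amount: roughly in [-1, 1]; positive = stretched, negative = compressed.
    Wrinkles roughly perpendicular to the stress axis are attenuated when
    stretched and deepened when compressed; aligned wrinkles are left unchanged.
    The linear falloff below is illustrative, not the paper's exact model."""
    out = np.zeros_like(disp_bins[0])
    for disp, d in zip(disp_bins, bin_dirs):
        # 1 when the wrinkle runs perpendicular to the stress axis, 0 when parallel
        perpendicularity = 1.0 - abs(float(np.dot(d, stretch_dir)))
        scale = 1.0 - stretch_amount * perpendicularity  # shallower when stretched
        out += float(np.clip(scale, 0.0, 2.0)) * disp
    return out
```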
Item: From Few to Full: High-Resolution 3D Object Reconstruction from Sparse Views and Unknown Poses (The Eurographics Association, 2024)
Yao, Grekou; Mavromatis, Sebastien; Mari, Jean-Luc; Liu, Lingjie; Averkiou, Melinos
Recent progress in 3D reconstruction has been driven by generative models, moving from traditional multi-view dependence to techniques based on single-image diffusion models. However, these approaches often struggle in sparse-view scenarios, require known poses or template shapes, and frequently fail at high-resolution reconstruction. Addressing these issues, we introduce the "F2F" (Few to Full) framework, designed to craft high-resolution 3D models from few views and unknown camera poses, creating fully realistic 3D objects without external constraints. F2F employs a hybrid approach that optimizes both implicit and explicit representations through a unique pipeline involving a pretrained diffusion model for pose estimation, a deformable tetrahedral grid for feature volume construction, and an MLP (neural network) for surface optimization. Our method sets a new standard by ensuring surface geometry, topology, and semantic consistency through differentiable rendering, aiming for a comprehensive solution to 3D reconstruction from sparse views.

Item: Interactive VPL-based Global Illumination on the GPU Using Adaptive Fuzzy Clustering (The Eurographics Association, 2024)
Colom, Arnau; Marques, Ricardo; Santos, Luís Paulo; Liu, Lingjie; Averkiou, Melinos
Physically-based synthesis of high-quality imagery entails a significant workload, which makes interactive rendering a very challenging task. Our approach to achieving interactive frame rates while accurately simulating global illumination phenomena is a Virtual Point Lights (VPL) ray tracer that runs entirely on the GPU. Our performance guarantees arise from clustering both shading points and VPLs and computing visibility only among cluster representatives. Previous approaches to the same problem resort to K-means clustering, which requires the user to specify the number of clusters, a rather unintuitive requirement. We propose a massively parallel, GPU-efficient Quality-Threshold clustering algorithm, which instead requires the user to specify a quality parameter. The algorithm dynamically adjusts the number of clusters depending both on the specified quality threshold and on camera-geometry conditions during execution.
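Quality-Threshold clustering, which the last abstract adapts to the GPU, can be summarized by the sequential reference sketch below: grow a candidate cluster around every remaining point, keep the largest one whose diameter stays under the threshold, remove its members, and repeat. This simplified greedy version is for orientation only and is not the authors' massively parallel algorithm.

```python
import numpy as np

def qt_cluster(points, diameter):
    """Sequential Quality-Threshold clustering sketch.
    points: (N, dim) array; diameter: maximum allowed pairwise distance
    inside a cluster (the 'quality' parameter replacing a fixed cluster count).
    Candidate growth here is a simplified greedy pass for readability."""
    remaining = list(range(len(points)))
    clusters = []
    while remaining:
        best = None
        for seed in remaining:
            members = [seed]
            for p in remaining:
                if p == seed:
                    continue
                cand = points[members + [p]]
                # cluster diameter = largest pairwise distance among members
                d = np.max(np.linalg.norm(cand[:, None] - cand[None, :], axis=-1))
                if d <= diameter:
                    members.append(p)
            if best is None or len(members) > len(best):
                best = members
        clusters.append(best)
        remaining = [i for i in remaining if i not in best]
    return clusters
```

The user-facing knob is the diameter (quality) threshold rather than a cluster count, which is the usability point the abstract raises against K-means.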