43-Issue 2
Now showing 1 - 20 of 54
Item: EUROGRAPHICS 2024: CGF 43-2 Frontmatter (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Bermano, Amit H.; Kalogerakis, Evangelos

Item: Raster-to-Graph: Floorplan Recognition via Autoregressive Graph Prediction with an Attention Transformer (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Hu, Sizhe; Wu, Wenming; Su, Ruolin; Hou, Wanni; Zheng, Liping; Xu, Benzhu; Bermano, Amit H.; Kalogerakis, Evangelos
Recognizing the detailed information embedded in rasterized floorplans is at the research forefront of the computer graphics and vision communities. With the advent of deep neural networks, automatic floorplan recognition has made tremendous breakthroughs. However, co-recognizing both the structures and semantics of floorplans through one neural network remains a significant challenge. In this paper, we introduce Raster-to-Graph, a novel framework that automatically achieves structural and semantic recognition of floorplans. We represent vectorized floorplans as structural graphs embedded with floorplan semantics, thus transforming the floorplan recognition task into a structural graph prediction problem. We design an autoregressive prediction framework using the visual attention Transformer architecture, iteratively predicting the wall junctions and wall segments of floorplans in the order of graph traversal. Additionally, we propose a large-scale floorplan dataset containing over 10,000 real-world residential floorplans. Extensive experiments demonstrate the effectiveness of our framework, showing significant improvements on all metrics. Qualitative and quantitative evaluations indicate that our framework outperforms existing state-of-the-art methods. Code and dataset for this paper are available at: https://github.com/HSZVIS/Raster-to-Graph.

Item: Interactive Exploration of Vivid Material Iridescence based on Bragg Mirrors (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Fourneau, Gary; Pacanowski, Romain; Barla, Pascal; Bermano, Amit H.; Kalogerakis, Evangelos
Many animals, plants and gems exhibit iridescent material appearance in nature. These appearances are due to specific geometric structures at scales comparable to visible wavelengths, yielding so-called structural colors. The most vivid examples are due to photonic crystals, where the same structure is repeated in one, two or three dimensions, augmenting the magnitude and complexity of interference effects. In this paper, we study the appearance of 1D photonic crystals (repetitive pairs of thin films), also called Bragg mirrors. Previous work has considered the effect of multiple thin films using the classical transfer matrix approach, whose complexity grows as the number of repetitions increases. Our first contribution is to introduce to the graphics community a more efficient closed-form formula [Yeh88] for Bragg mirror reflectance, as well as an approximation that lends itself to efficient spectral integration for RGB rendering. We then explore the appearance of stacks made of rough Bragg layers. Here our contribution is to show that they may lead to a ballistic transmission, significantly speeding up position-free rendering and leading to an efficient single-reflection BRDF model.

Item: TailorMe: Self-Supervised Learning of an Anatomically Constrained Volumetric Human Shape Model (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Wenninger, Stephan; Kemper, Fabian; Schwanecke, Ulrich; Botsch, Mario; Bermano, Amit H.; Kalogerakis, Evangelos
Human shape spaces have been extensively studied, as they are a core element of human shape and pose inference tasks. Classic methods for creating a human shape model register a surface template mesh to a database of 3D scans and use dimensionality reduction techniques, such as Principal Component Analysis, to learn a compact representation. While these shape models enable global shape modifications by correlating anthropometric measurements with the learned subspace, they provide only limited localized shape control. We instead register a volumetric anatomical template, consisting of skeleton bones and soft tissue, to the surface scans of the CAESAR database. We further enlarge our training data to the full Cartesian product of all skeletons and all soft tissues using physically plausible volumetric deformation transfer. This data is then used to learn an anatomically constrained volumetric human shape model in a self-supervised fashion. The resulting TailorMe model enables shape sampling, localized shape manipulation, and fast inference from given surface scans.

Item: Real-Time Underwater Spectral Rendering (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Monzon, Nestor; Gutierrez, Diego; Akkaynak, Derya; Muñoz, Adolfo; Bermano, Amit H.; Kalogerakis, Evangelos
The light field in an underwater environment is characterized by complex multiple scattering interactions and wavelength-dependent attenuation, requiring significant computational resources for the simulation of underwater scenes. We present a novel approach that makes it possible to simulate multi-spectral underwater scenes, in a physically-based manner, in real time. Our key observation is the following: in the vertical direction, the steady decay in irradiance as a function of depth is characterized by the diffuse downwelling attenuation coefficient, which oceanographers routinely measure for different types of waters.
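The exponential depth decay described in this abstract can be sketched directly; a minimal illustration (the per-channel coefficients below are hypothetical placeholders, not values from the paper's database of measurements):

```python
import math

def downwelling_irradiance(e_surface, k_d, depth_m):
    """Irradiance at depth for one wavelength band, following the
    exponential decay E(z) = E(0) * exp(-K_d * z)."""
    return e_surface * math.exp(-k_d * depth_m)

# Hypothetical diffuse downwelling attenuation coefficients [1/m] per RGB band
K_D_RGB = {"r": 0.35, "g": 0.07, "b": 0.05}

# At 10 m depth, red light is attenuated far more strongly than blue
at_10m = {band: downwelling_irradiance(1.0, k, 10.0) for band, k in K_D_RGB.items()}
```

Because the decay is a closed-form function of depth, per-band evaluation like this is cheap enough for real-time use, which is the property the paper builds on.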
We rely on a database of such real-world measurements to obtain an analytical approximation to the Radiative Transfer Equation, allowing for real-time spectral rendering with results comparable to Monte Carlo ground-truth references, in a fraction of the time. We show results simulating underwater appearance for the different optical water types, including volumetric shadows and dynamic, spatially varying lighting near the water surface.

Item: Computational Smocking through Fabric-Thread Interaction (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Zhou, Ningfeng; Ren, Jing; Sorkine-Hornung, Olga; Bermano, Amit H.; Kalogerakis, Evangelos
We formalize Italian smocking, an intricate embroidery technique that gathers flat fabric into pleats along meandering lines of stitches, with the pleats folding and gathering where the stitching veers. In contrast to English smocking, characterized by colorful stitches decorating uniformly shaped pleats, and Canadian smocking, which uses localized knots to form voluminous pleats, Italian smocking lets the fabric move freely along the stitched threads following curved paths, resulting in complex and unpredictable pleats with highly diverse, irregular structures, achieved simply by pulling on the threads. We introduce a novel method for digitally previewing Italian smocking results, given the thread stitching path as input. Our method uses a coarse-grained mass-spring system to simulate the interaction between the threads and the fabric. This configuration guides the fine-level fabric deformation through an adaptation of the state-of-the-art simulator C-IPC [LKJ21].
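The coarse-grained mass-spring idea can be illustrated with a generic explicit-integration step (a minimal sketch with assumed constants, not the paper's solver or C-IPC):

```python
import numpy as np

def spring_step(x, v, edges, rest_len, k=50.0, mass=1.0, damping=0.98, dt=1e-3):
    """One symplectic Euler step of a mass-spring network.

    x: (n, 3) positions, v: (n, 3) velocities,
    edges: list of (i, j) index pairs, rest_len: per-edge rest lengths.
    """
    f = np.zeros_like(x)
    for (i, j), l0 in zip(edges, rest_len):
        d = x[j] - x[i]
        length = np.linalg.norm(d)
        if length < 1e-12:
            continue
        # Hooke's law along the edge direction; stretched edges pull ends together
        fs = k * (length - l0) * (d / length)
        f[i] += fs
        f[j] -= fs
    v = damping * (v + dt * f / mass)
    return x + dt * v, v
```

Pulling a thread in such a system shortens the effective rest lengths along the stitching path, which is what gathers the coupled fabric nodes into pleats.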
Our method models the general problem of fabric-thread interaction and can be readily adapted to preview Canadian smocking as well. We compare our results to baseline approaches and to physical fabrications to demonstrate the accuracy of our method.

Item: Real-time Neural Rendering of Dynamic Light Fields (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Coomans, Arno; Dominici, Edoardo Alberto; Döring, Christian; Mueller, Joerg H.; Hladky, Jozef; Steinberger, Markus; Bermano, Amit H.; Kalogerakis, Evangelos
Synthesising high-quality views of dynamic scenes via path tracing is prohibitively expensive. Although caching offline-quality global illumination in neural networks alleviates this issue, existing neural view synthesis methods are limited to mainly static scenes, have low inference performance or do not integrate well with existing rendering paradigms. We propose a novel neural method that captures a dynamic light field, renders at real-time frame rates at 1920x1080 resolution and integrates seamlessly with Monte Carlo ray tracing frameworks. We demonstrate how a combination of spatial, temporal and a novel surface-space encoding are each effective at capturing different kinds of spatio-temporal signals. Together with a compact fully-fused neural network and architectural improvements, we achieve a twenty-fold increase in network inference speed compared to related methods at equal or better quality. Our approach is suitable for providing offline-quality real-time rendering in a variety of scenarios, such as free-viewpoint video, interactive multi-view rendering, or streaming rendering.
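Spatial and temporal network inputs of this kind are commonly lifted with octave-spaced sinusoids; a generic NeRF-style sketch of such an encoding (an illustration of the general technique, not the paper's surface-space encoding):

```python
import numpy as np

def frequency_encode(x, num_bands=4):
    """Encode coordinates with sin/cos at octave-spaced frequencies.

    x: (..., d) array; returns (..., d * 2 * num_bands).
    """
    x = np.asarray(x, dtype=np.float64)
    freqs = 2.0 ** np.arange(num_bands) * np.pi   # pi, 2*pi, 4*pi, ...
    scaled = x[..., None] * freqs                 # (..., d, num_bands)
    enc = np.concatenate([np.sin(scaled), np.cos(scaled)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)
```

The high-frequency bands let a small MLP represent sharp spatio-temporal detail that raw coordinates alone would smooth out.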
Finally, our work can be integrated into other rendering paradigms, e.g., providing a dynamic background for interactive scenarios where the foreground is rendered with traditional methods.

Item: Neural Garment Dynamics via Manifold-Aware Transformers (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Li, Peizhuo; Wang, Tuanfeng Y.; Kesdogan, Timur Levent; Ceylan, Duygu; Sorkine-Hornung, Olga; Bermano, Amit H.; Kalogerakis, Evangelos
Data-driven and learning-based solutions for modeling dynamic garments have advanced significantly, especially in the context of digital humans. However, existing approaches often focus on modeling garments with respect to a fixed parametric human body model and are limited to garment geometries seen during training. In this work, we take a different approach and model the dynamics of a garment by exploiting its local interactions with the underlying human body. Specifically, as the body moves, we detect local garment-body collisions, which drive the deformation of the garment. At the core of our approach is a mesh-agnostic garment representation and a manifold-aware transformer network design, which together enable our method to generalize to unseen garment and body geometries. We evaluate our approach on a wide variety of garment types and motion sequences and provide competitive qualitative and quantitative results with respect to the state of the art.

Item: Unfolding via Mesh Approximation using Surface Flows (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Zawallich, Lars; Pajarola, Renato; Bermano, Amit H.; Kalogerakis, Evangelos
Manufacturing a 3D object by folding from a 2D material is typically done in four steps: 3D surface approximation, unfolding the surface into a plane, printing and cutting the outline of the unfolded shape, and refolding it into a 3D object. Usually, these steps are treated separately from each other.
In this work we jointly address the first two pipeline steps by allowing the 3D representation to change smoothly while unfolding. This way, we increase the chances of overcoming possible unfoldability issues. To join the two pipeline steps, our work proposes and combines different surface flows with a Tabu Unfolder. We empirically investigate the effects that different surface flows have on the performance as well as on the quality of the unfoldings. Additionally, we demonstrate the ability to solve cases by approximation which comparable algorithms either have to segment or cannot solve at all.

Item: GLS-PIA: n-Dimensional Spherical B-Spline Curve Fitting based on Geodesic Least Square with Adaptive Knot Placement (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Zhao, Yuming; Wu, Zhongke; Wang, Xingce; Bermano, Amit H.; Kalogerakis, Evangelos
Due to the widespread applications of curves on n-dimensional spheres, fitting curves on n-dimensional spheres has received increasing attention in recent years. However, owing to the non-Euclidean nature of spheres, curve fitting methods on n-dimensional spheres often struggle to balance fitting accuracy and curve fairness. In this paper, we propose a new fitting framework, GLS-PIA, for parameterized point sets on n-dimensional spheres to address this challenge, together with a proof of the method. Firstly, we propose a progressive iterative approximation method based on geodesic least squares which can directly optimize the geodesic least squares loss on the n-sphere, improving the accuracy of the fitting. Additionally, we use an error allocation method based on contribution coefficients to ensure the fairness of the fitting curve. Secondly, we propose an adaptive knot placement method based on geodesic difference to estimate a more reasonable distribution of control points in the parameter domain, placing more control points in areas with greater detail.
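The geodesic building blocks of such a spherical fit are arc-length distances and geodesic interpolation; a minimal sketch (standard spherical geometry, not the GLS-PIA iteration itself):

```python
import numpy as np

def geodesic_distance(p, q):
    """Great-arc distance between unit vectors on the n-sphere."""
    dot = np.clip(np.dot(p, q), -1.0, 1.0)
    return np.arccos(dot)

def slerp(p, q, t):
    """Point at parameter t along the geodesic from p to q (unit vectors)."""
    omega = geodesic_distance(p, q)
    if omega < 1e-12:
        return p.copy()
    return (np.sin((1 - t) * omega) * p + np.sin(t * omega) * q) / np.sin(omega)
```

A geodesic least squares loss sums squared `geodesic_distance` residuals between data points and curve points, rather than the chordal (Euclidean) residuals a flat-space fit would use.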
This enables B-spline curves to capture more details with a limited number of control points. Experimental results demonstrate that our framework achieves outstanding performance, especially in handling imbalanced data points. (In this paper, ''sphere'' refers to the n-sphere (n ≥ 2) unless otherwise specified.)

Item: Region-Aware Simplification and Stylization of 3D Line Drawings (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Nguyen, Vivien; Fisher, Matthew; Hertzmann, Aaron; Rusinkiewicz, Szymon; Bermano, Amit H.; Kalogerakis, Evangelos
Shape-conveying line drawings generated from 3D models normally create closed regions in image space. These lines and regions can be stylized to mimic various artistic styles, but for complex objects the extracted topology is unnecessarily dense, leading to unappealing and unnatural results under stylization. Prior works typically simplify line drawings without considering the regions between them, and stylize lines and regions separately before compositing them together, resulting in unintended inconsistencies. We present a method for joint simplification of lines and regions that penalizes large changes to region structure while keeping regions closed. This enables region stylization that remains consistent with the outline curves and the underlying 3D geometry.

Item: Enhancing Spatiotemporal Resampling with a Novel MIS Weight (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Pan, Xingyue; Zhang, Jiaxuan; Huang, Jiancong; Liu, Ligang; Bermano, Amit H.; Kalogerakis, Evangelos
In real-time rendering, optimizing the sampling of large-scale candidate sets is crucial. The spatiotemporal reservoir resampling (ReSTIR) method provides an effective approach for handling large numbers of candidate samples, while the Generalized Resampled Importance Sampling (GRIS) theory provides a general framework for resampling algorithms.
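Resampled importance sampling, which GRIS generalizes, can be sketched in a few lines (a textbook RIS selection with its unbiased contribution weight, not the paper's proposed MIS weight):

```python
import random

def resample(candidates, source_pdf, target_fn, rng):
    """Pick one of M candidates with probability proportional to
    w_i = target_fn(x_i) / source_pdf(x_i).

    Returns (sample, W) where f(sample) * W is an unbiased estimate of the
    integral of f when target_fn is the (unnormalized) resampling target.
    """
    w = [target_fn(x) / source_pdf(x) for x in candidates]
    w_sum = sum(w)
    if w_sum == 0.0:
        return None, 0.0
    u = rng.random() * w_sum
    acc = 0.0
    chosen = candidates[-1]          # fallback for floating-point edge cases
    for x, wi in zip(candidates, w):
        acc += wi
        if u <= acc:
            chosen = x
            break
    # W_y = (1 / target_fn(y)) * (1 / M) * sum_i w_i
    W = w_sum / (len(candidates) * target_fn(chosen))
    return chosen, W
```

Spatiotemporal reuse chains such resampling steps across pixels and frames, which is where the choice of MIS weight that the paper addresses becomes critical.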
However, we have observed that with the generalized multiple importance sampling (MIS) weight used in previous work, variance gradually amplifies during spatiotemporal reuse when the candidate sampling domains differ significantly. To address this issue, we propose a new MIS weight suitable for resampling that blends samples from different sampling domains, ensuring convergence of results as the proportion of non-canonical samples increases. Additionally, we apply this weight to temporal resampling to reduce noise caused by scene changes or jitter. Our method effectively reduces energy loss in the biased version of ReSTIR DI while incurring no additional overhead, and it also suppresses artifacts caused by a high proportion of temporal samples. As a result, our approach leads to lower variance in the sampling results.

Item: Neural Denoising for Deep-Z Monte Carlo Renderings (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Zhang, Xianyao; Röthlin, Gerhard; Zhu, Shilin; Aydin, Tunç Ozan; Salehi, Farnood; Gross, Markus; Papas, Marios; Bermano, Amit H.; Kalogerakis, Evangelos
We present a kernel-predicting neural denoising method for path-traced deep-Z images that facilitates their usage in animation and visual effects production. Deep-Z images provide enhanced flexibility during compositing, as they contain color, opacity, and other rendered data at multiple depth-resolved bins within each pixel. However, they are subject to noise, and rendering until convergence is prohibitively expensive. The current state of the art in deep-Z denoising yields objectionable artifacts, and current neural denoising methods are incapable of handling the variable number of depth bins in deep-Z images. Our method extends kernel-predicting convolutional neural networks to address the challenges stemming from denoising deep-Z images.
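Kernel-predicting reconstruction applies a per-pixel, network-predicted filter as a normalized weighted average; a 1D sketch of that core operation (generic, without the paper's depth-bin handling):

```python
import numpy as np

def apply_predicted_kernels(signal, kernel_logits):
    """Denoise a 1D signal: each position gets a softmax-normalized kernel
    over its radius-1 neighborhood (3 taps), as a network would predict per pixel.

    signal: (n,), kernel_logits: (n, 3) raw per-pixel network outputs.
    """
    logits = np.asarray(kernel_logits, dtype=np.float64)
    weights = np.exp(logits - logits.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)           # softmax per pixel
    padded = np.pad(signal, 1, mode="edge")                 # replicate borders
    taps = np.stack([padded[:-2], padded[1:-1], padded[2:]], axis=1)  # (n, 3)
    return (weights * taps).sum(axis=1)
```

Because the kernel weights are normalized, the filter is a convex combination of noisy neighbors, which keeps the output in the dynamic range of the input.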
We propose a hybrid reconstruction architecture that combines the depth-resolved reconstruction at each bin with the flattened reconstruction at the pixel level. Moreover, we propose depth-aware neighbor indexing of the depth-resolved inputs to the convolution and denoising kernel application operators, which reduces artifacts caused by depth misalignment present in deep-Z images. We evaluate our method on a production-quality deep-Z dataset, demonstrating significant improvements in denoising quality and performance compared to the current state-of-the-art deep-Z denoiser. By addressing the significant cost of rendering path-traced deep-Z images, we believe that our approach will pave the way for broader adoption of deep-Z workflows in future productions.

Item: ShellNeRF: Learning a Controllable High-resolution Model of the Eye and Periocular Region (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Li, Gengyan; Sarkar, Kripasindhu; Meka, Abhimitra; Buehler, Marcel; Mueller, Franziska; Gotardo, Paulo; Hilliges, Otmar; Beeler, Thabo; Bermano, Amit H.; Kalogerakis, Evangelos
Eye gaze and expressions are crucial non-verbal signals in face-to-face communication. Visual effects and telepresence demand significant improvements in personalized tracking, animation, and synthesis of the eye region to achieve true immersion. Morphable face models, in combination with coordinate-based neural volumetric representations, show promise in solving the difficult problem of reconstructing intricate geometry (eyelashes) and synthesizing photorealistic appearance variations (wrinkles and specularities) of eye performances. We propose a novel hybrid representation, ShellNeRF, that builds a discretized volume around a 3DMM face mesh using concentric surfaces to model the deformable 'periocular' region. We define a canonical space using the UV layout of the shells that constrains the space of dense correspondence search.
Combined with an explicit eyeball mesh for modeling corneal light transport, our model allows for animatable, photorealistic 3D synthesis of the whole eye region. Using multi-view video input, we demonstrate significant improvements over the state of the art in expression re-enactment and transfer for high-resolution close-up views of the eye region.

Item: Physically-based Analytical Erosion for fast Terrain Generation (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Tzathas, Petros; Gailleton, Boris; Steer, Philippe; Cordonnier, Guillaume; Bermano, Amit H.; Kalogerakis, Evangelos
Terrain generation methods have long been divided between procedural and physically-based approaches. Procedural methods build upon the fast evaluation of a mathematical function but suffer from a lack of geological consistency, while physically-based simulation enforces this consistency at the cost of thousands of iterations unraveling the history of the landscape. In particular, the simulation of the competition between tectonic uplift and fluvial erosion expressed by the stream power law has raised recent interest in computer graphics, as it allows the generation and control of consistent large-scale mountain ranges, albeit at the cost of a lengthy simulation. In this paper, we explore the analytical solutions of the stream power law and propose a method that is both physically-based and procedural, allowing fast and consistent large-scale terrain generation. In our approach, time is no longer the stopping criterion of an iterative process but acts as the parameter of a mathematical function: a slider that controls the aging of the input terrain, from subtle erosion to complete replacement by a fully formed mountain range. While analytical solutions have been proposed by the geomorphology community for the 1D case, extending them to a 2D heightmap proves challenging.
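The stream power law referenced above balances uplift against drainage-driven incision, dh/dt = U - K * A^m * S^n; it can be sketched as an explicit 1D update (illustrative constants; the paper's point is precisely to replace this kind of iteration with analytical solutions):

```python
import numpy as np

def stream_power_step(h, area, dx, uplift=1e-3, K=1e-5, m=0.5, n=1.0, dt=100.0):
    """One explicit step of dh/dt = U - K * A^m * S^n on a 1D river profile.

    h: (N,) elevations from outlet (fixed base level at index 0) to divide.
    area: (N,) drainage area at each node; S is the slope toward the outlet.
    """
    h_new = h.copy()
    for i in range(1, len(h)):
        slope = max((h[i] - h[i - 1]) / dx, 0.0)   # downstream slope
        erosion = K * area[i] ** m * slope ** n
        h_new[i] = h[i] + dt * (uplift - erosion)
    return h_new
```

Thousands of such steps are needed to reach the uplift-erosion equilibrium that an analytical solution evaluates in one pass.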
We propose an efficient implementation of the analytical solutions with a multigrid-accelerated iterative process, together with solutions to incorporate landslides and hillslope processes, two erosion factors that complement the stream power law.

Item: Polygon Laplacian Made Robust (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Bunge, Astrid; Bukenberger, Dennis R.; Wagner, Sven Dominik; Alexa, Marc; Botsch, Mario; Bermano, Amit H.; Kalogerakis, Evangelos
Discrete Laplacians are the basis for various tasks in geometry processing. While the most desirable properties of the discretization invariably lead to the so-called cotangent Laplacian for triangle meshes, applying the same principles to polygon Laplacians leaves degrees of freedom in their construction. From linear finite elements it is well known how the shape of triangles affects both the error and the operator's condition. We observe that shape quality can be encapsulated as the trace of the Laplacian and suggest that trace minimization is a helpful tool for improving numerical behavior. We apply this observation to the polygon Laplacian constructed from a virtual triangulation [BHKB20] to derive optimal parameters per polygon. Moreover, we devise a smoothing approach for the vertices of a polygon mesh that minimizes the trace. We analyze the properties of the optimized discrete operators and show their superiority over generic parameter selection in theory and through various experiments.

Item: Stylized Face Sketch Extraction via Generative Prior with Limited Data (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Yun, Kwan; Seo, Kwanggyoon; Seo, Chang Wook; Yoon, Soyeon; Kim, Seongcheol; Ji, Soohyun; Ashtari, Amirsaman; Noh, Junyong; Bermano, Amit H.; Kalogerakis, Evangelos
Facial sketches are both a concise way of showing the identity of a person and a means to express artistic intention.
While a few techniques have recently emerged that allow sketches to be extracted in different styles, they typically rely on a large amount of data that is difficult to obtain. Here, we propose StyleSketch, a method for extracting high-resolution stylized sketches from a face image. Using the rich semantics of the deep features from a pretrained StyleGAN, we are able to train a sketch generator with only 16 pairs of face images and corresponding sketches. The sketch generator uses part-based losses with two-stage learning for fast convergence during training for high-quality sketch extraction. Through a set of comparisons, we show that StyleSketch outperforms existing state-of-the-art sketch extraction methods and few-shot image adaptation methods for the task of extracting high-resolution abstract face sketches. We further demonstrate the versatility of StyleSketch by extending its use to other domains and explore the possibility of semantic editing. The project page can be found at https://kwanyun.github.io/stylesketch_project.

Item: Navigating the Manifold of Translucent Appearance (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Lanza, Dario; Masia, Belen; Jarabo, Adrian; Bermano, Amit H.; Kalogerakis, Evangelos
We present a perceptually-motivated manifold for translucent appearance, designed for intuitive editing of translucent materials by navigating through the manifold. Classic tools for editing translucent appearance, based on sliders that tune a number of parameters, are challenging for non-expert users: these parameters have a highly non-linear effect on appearance and exhibit complex interplay and similarity relations. Instead, we pose editing as a navigation task in a low-dimensional space of appearances, which abstracts the user from the underlying optical parameters.
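A low-dimensional navigation space of this kind can be sketched with classical multidimensional scaling over a pairwise perceptual-distance matrix (a generic stand-in; the paper's metric selection and manifold construction may differ):

```python
import numpy as np

def classical_mds(D, dims=2):
    """Embed n points in `dims` dimensions from a symmetric pairwise
    distance matrix D (n, n) via double centering + eigendecomposition."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                 # Gram matrix of centered points
    vals, vecs = np.linalg.eigh(B)              # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:dims]         # keep the largest `dims`
    L = np.sqrt(np.clip(vals[idx], 0.0, None))  # clip tiny negative eigenvalues
    return vecs[:, idx] * L                     # (n, dims) embedded coordinates
```

Navigating such an embedding moves the user along perceptually meaningful directions instead of raw optical parameters.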
To achieve this, we build a low-dimensional continuous manifold of translucent appearance that correlates with how humans perceive these materials. We first analyze the correlation of different distance metrics in image space with human perception, then select the best-performing metric to build a low-dimensional manifold that can be used to navigate the space of translucent appearance. To evaluate the validity of the proposed manifold within its intended application scenario, we build an editing interface that leverages the manifold and relies on image navigation plus a fine-tuning step to edit appearance. We compare our intuitive interface to a traditional, slider-based one in a user study, demonstrating its effectiveness and superior performance when editing translucent objects.

Item: 3D Reconstruction and Semantic Modeling of Eyelashes (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Kerbiriou, Glenn; Avril, Quentin; Marchal, Maud; Bermano, Amit H.; Kalogerakis, Evangelos
High-fidelity digital human modeling has become crucial in various applications, including gaming, visual effects and virtual reality. Despite the significant impact of eyelashes on facial aesthetics, their reconstruction and modeling have been largely unexplored. In this paper, we introduce the first data-driven generative model of eyelashes based on semantic features. This model is derived from real data via a new 3D eyelash reconstruction method based on multi-view images. The reconstructed data is made available, constituting the first published dataset of 3D eyelashes. Through an innovative extraction process, we determine the features of any set of eyelashes and present detailed descriptive statistics of human eyelash shapes. The proposed eyelash model, which relies exclusively on semantic parameters, effectively captures the appearance of a set of eyelashes.
Results show that the proposed model enables interactive, intuitive and realistic eyelash modeling for non-experts, enriching avatar creation and synthetic data generation pipelines.

Item: Volcanic Skies: Coupling Explosive Eruptions with Atmospheric Simulation to Create Consistent Skyscapes (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Pretorius, Pieter C.; Gain, James; Lastic, Maud; Cordonnier, Guillaume; Chen, Jiong; Rohmer, Damien; Cani, Marie-Paule; Bermano, Amit H.; Kalogerakis, Evangelos
Explosive volcanic eruptions rank among the most terrifying natural phenomena and are thus frequently depicted in films, games, and other media, usually with a bespoke one-off solution. In this paper, we introduce the first general-purpose model for bi-directional interaction between the atmosphere and a volcano plume. In line with recent interactive volcano models, we approximate the plume dynamics with Lagrangian disks and spheres and the atmosphere with sparse layers of 2D Eulerian grids, enabling us to focus on the transfer of physical quantities such as temperature, ash, moisture, and wind velocity between these sub-models. We subsequently generate volumetric animations by noise-based procedural upsampling keyed to aspects of advection, convection, moisture, and ash content to generate a fully realized volcanic skyscape. Our model captures most of the visually salient features emerging from volcano-sky interaction, such as windswept plumes, enmeshed cap, bell and skirt clouds, shockwave effects, ash rain, and sheaths of lightning visible in the dark.