Search Results
Now showing 1 - 10 of 12
Item: Optimized Sampling for View Interpolation in Light Fields with Overlapping Patches (The Eurographics Association, 2018)
Authors: Schedl, David C.; Bimber, Oliver. Editors: Diamanti, Olga; Vaxman, Amir.
We present optimized sampling masks that reduce the complexity of camera arrays while preserving the quality of light fields captured at high directional sampling resolution. We propose a new quality metric based on sampling-theoretic considerations, a new mask estimation approach that reduces the search space by applying regularity and symmetry constraints, and an enhanced upsampling technique using compressed sensing that supports maximal patch overlap. Our approach outperforms state-of-the-art view-interpolation techniques for light fields and does not rely on depth reconstruction.

Item: Voxelizing Light-Field Recordings (The Eurographics Association, 2019)
Authors: Schedl, David; Kurmi, Indrajit; Bimber, Oliver. Editors: Fusiello, Andrea; Bimber, Oliver.
Light fields are an emerging image-based representation that supports free-viewpoint navigation of recorded scenes, as demanded by several recent applications (e.g., virtual reality). Pure image-based representations, however, quickly become inefficient, as a large number of images must be captured, stored, and processed. Geometric scene representations require less storage and are more efficient to render, but geometry reconstruction is unreliable and may fail for complex scene parts. Furthermore, view-dependent effects that are preserved in light fields are lost in pure geometry-based techniques. We therefore propose a hybrid representation and rendering scheme for recorded dense light fields: we extract isotropic scene regions and represent them by voxels, while the remaining areas are represented as a sparse light field. Compared to dense light fields, storage demands are reduced while visual quality is sustained.
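The abstract above does not spell out how isotropic regions are detected. A minimal sketch of one plausible criterion, assuming a per-region variance test over the views that observe the region; the threshold `var_threshold` and function names are illustrative assumptions, not from the paper:

```python
import numpy as np

def classify_region(radiance_samples, var_threshold=1e-3):
    """Classify a scene region as isotropic (voxel candidate) or
    view-dependent (keep it in the sparse light field).

    radiance_samples: (V, 3) array of RGB observations of the same
    surface region as seen from V different light-field views.
    """
    # Mean per-channel variance of appearance across the views.
    variance = radiance_samples.var(axis=0).mean()
    if variance < var_threshold:
        # Appearance barely changes with view direction: a single
        # voxel with the mean color is enough.
        return "voxel", radiance_samples.mean(axis=0)
    # View-dependent (glossy/specular): keep the per-view samples.
    return "light_field", radiance_samples
```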
Item: An Inverse Procedural Modeling Pipeline for Stylized Brush Stroke Rendering (The Eurographics Association, 2024)
Authors: Li, Hao; Guan, Zhongyue; Wang, Zeyu. Editors: Hu, Ruizhen; Charalambous, Panayiotis.
Stylized brush strokes are crucial for digital artists to create drawings that express a desired artistic style. To obtain the ideal brush, artists must spend considerable time manually tuning parameters and creating customized brushes, which hinders the completion, redrawing, or modification of digital drawings. This paper proposes an inverse procedural modeling pipeline for predicting brush parameters and rendering stylized strokes given a single sample drawing. Our pipeline involves patch segmentation as a preprocessing step, parameter prediction based on deep learning, and brush generation using a procedural rendering engine. Our method enhances the overall experience of digital drawing recreation by empowering artists with more intuitive control and consistent brush effects.

Item: Compression and Real-Time Rendering of Inward Looking Spherical Light Fields (The Eurographics Association, 2020)
Authors: Hajisharif, Saghi; Miandji, Ehsan; Baravadish, Gabriel; Larsson, Per; Unger, Jonas. Editors: Wilkie, Alexander; Banterle, Francesco.
Photorealistic rendering is an essential tool for immersive virtual reality. The data structure of choice here is typically the light field, since it contains multidimensional information about the captured environment that can provide motion parallax and view-dependent effects such as highlights. There are various ways to acquire light fields depending on the nature of the scene, the limitations of the capturing setup, and the application at hand. Our focus in this paper is full-parallax imaging of large-scale static objects for photorealistic real-time rendering. To this end, we introduce and simulate a new design for capturing inward-looking spherical light fields, and propose a system for efficient compression and real-time rendering of such data using consumer-level hardware suitable for virtual reality applications.

Item: D-NPC: Dynamic Neural Point Clouds for Non-Rigid View Synthesis from Monocular Video (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Authors: Kappel, Moritz; Hahlbohm, Florian; Scholz, Timon; Castillo, Susana; Theobalt, Christian; Eisemann, Martin; Golyanik, Vladislav; Magnor, Marcus. Editors: Bousseau, Adrien; Day, Angela.
Dynamic reconstruction and spatiotemporal novel-view synthesis of non-rigidly deforming scenes have recently gained increased attention. While existing work achieves impressive quality and performance on multi-view or teleporting-camera setups, most methods fail to efficiently and faithfully recover motion and appearance from casual monocular captures. This paper contributes to the field by introducing a new method for dynamic novel-view synthesis from monocular video, such as casual smartphone captures. Our approach represents the scene as a dynamic neural point cloud, an implicit time-conditioned point distribution that encodes local geometry and appearance in separate hash-encoded neural feature grids for static and dynamic regions. By sampling a discrete point cloud from our model, we can efficiently render high-quality novel views using a fast differentiable rasterizer and a neural rendering network. Similar to recent work, we leverage advances in neural scene analysis by incorporating data-driven priors such as monocular depth estimation and object segmentation to resolve motion and depth ambiguities originating from the monocular captures. In addition to guiding the optimization process, we show that these priors can be exploited to explicitly initialize our scene representation, drastically improving optimization speed and final image quality. As evidenced by our experimental evaluation, our dynamic point cloud model not only enables fast optimization and real-time frame rates for interactive applications, but also achieves competitive image quality on monocular benchmark sequences. Our code and data are available online at https://moritzkappel.github.io/projects/dnpc/.

Item: Does 3D Gaussian Splatting Need Accurate Volumetric Rendering? (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Authors: Celarek, Adam; Kopanas, Georgios; Drettakis, George; Wimmer, Michael; Kerbl, Bernhard. Editors: Bousseau, Adrien; Day, Angela.
Since its introduction, 3D Gaussian Splatting (3DGS) has become an important reference method for learning 3D representations of a captured scene, allowing real-time novel-view synthesis with high visual quality and fast training times. Neural Radiance Fields (NeRFs), which preceded 3DGS, are based on a principled ray-marching approach for volumetric rendering. In contrast, while sharing a similar image formation model with NeRF, 3DGS uses a hybrid rendering solution that builds on the strengths of volume rendering and primitive rasterization. A crucial benefit of 3DGS is its performance, achieved through a set of approximations, in many cases relative to volumetric rendering theory. A naturally arising question is whether replacing these approximations with more principled volumetric rendering solutions can improve the quality of 3DGS. In this paper, we present an in-depth analysis of the various approximations and assumptions used by the original 3DGS solution. We demonstrate that, while more accurate volumetric rendering can help for low numbers of primitives, the power of efficient optimization and the large number of Gaussians allows 3DGS to outperform volumetric rendering despite its approximations.
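For context on the approximations the 3DGS paper above analyzes: a minimal sketch of the per-pixel front-to-back alpha compositing of depth-sorted splats that 3DGS uses in place of full volumetric ray marching. Array shapes and the early-termination threshold are illustrative assumptions:

```python
import numpy as np

def composite_sorted_gaussians(colors, alphas):
    """Front-to-back compositing of depth-sorted splats, the
    rasterization-style approximation 3DGS substitutes for
    volumetric integration along the ray.

    colors: (N, 3) per-Gaussian RGB evaluated at the pixel.
    alphas: (N,) per-Gaussian opacity at the pixel, sorted near to far.
    """
    pixel = np.zeros(3)
    transmittance = 1.0
    for c, a in zip(colors, alphas):
        pixel += transmittance * a * c   # weight by light not yet absorbed
        transmittance *= (1.0 - a)       # this splat blocks the rest
        if transmittance < 1e-4:         # early termination once opaque
            break
    return pixel
```

The global per-primitive depth sort (rather than per-sample ordering along each ray) is one of the approximations whose effect the paper examines.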
Item: Automated Skeleton Transformations on 3D Tree Models Captured from an RGB Video (The Eurographics Association, 2025)
Authors: Michels, Joren; Moonen, Steven; Güney, Enes; Temsamani, Abdellatif Bey; Michiels, Nick. Editors: Ceylan, Duygu; Li, Tzu-Mao.
Much work has been done on generating realistic-looking 3D models of trees. In most cases, L-systems are used to create variations of specific trees from a set of rules. While these techniques achieve good results, they require knowledge of a tree's structure to construct the generative rules. We propose a system that creates variations of trees captured in a single RGB video. With our method, plausible variations can be created without prior knowledge of the specific type of tree. The result is a fast and cost-efficient way to generate trees that resemble their real-life counterparts.

Item: Cardioid Caustics Generation with Conditional Diffusion Models (The Eurographics Association, 2025)
Authors: Uss, Wojciech; Kaliński, Wojciech; Kuznetsov, Alexandr; Anand, Harish; Kim, Sungye. Editors: Ceylan, Duygu; Li, Tzu-Mao.
Despite the latest advances in generative neural techniques for producing photorealistic images, they still lack the ability to generate multi-bounce, high-frequency lighting effects such as caustics. In this work, we tackle the problem of generating cardioid-shaped reflective caustics using diffusion-based generative models. We approach this problem as conditional image generation, using a diffusion model conditioned on multiple images encoding geometric, material, and illumination information, as well as light properties. We introduce a framework to fine-tune a pre-trained diffusion model and present results with visually plausible caustics.

Item: Light the Sprite: Pixel Art Dynamic Light Map Generation (The Eurographics Association, 2025)
Authors: Nikolov, Ivan. Editors: Ceylan, Duygu; Li, Tzu-Mao.
Correct lighting and shading are vital for pixel art design. Automating texture generation, such as normal, depth, and occlusion maps, has been a long-standing focus. We extend this by proposing a deep learning model that generates point and directional light maps from RGB pixel art sprites and specified light vectors. Our approach modifies a UNet architecture with CIN layers to incorporate positional and directional information, and uses ZoeDepth to produce depth data for training. Testing on a popular pixel art dataset shows that the generated light maps closely match those derived from depth or normal maps, as well as those created in manual programs. The model effectively relights complex sprites across styles and runs in real time, enhancing artist workflows. The code and dataset are available at https://github.com/IvanNik17/light-sprite.
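The CIN (conditional instance normalization) layers mentioned above can be sketched as follows. This is a generic PyTorch formulation of CIN conditioned on a light vector, not the paper's code; the class name and layer sizes are assumptions:

```python
import torch
import torch.nn as nn

class ConditionalInstanceNorm2d(nn.Module):
    """Instance norm whose per-channel scale and shift are predicted
    from a conditioning vector (e.g., a light position/direction),
    letting one UNet relight a sprite for arbitrary light inputs."""

    def __init__(self, num_features, cond_dim):
        super().__init__()
        self.norm = nn.InstanceNorm2d(num_features, affine=False)
        self.gamma = nn.Linear(cond_dim, num_features)  # per-channel scale
        self.beta = nn.Linear(cond_dim, num_features)   # per-channel shift

    def forward(self, x, cond):
        # x: (B, C, H, W) feature map; cond: (B, cond_dim) light vector
        h = self.norm(x)
        g = self.gamma(cond).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        b = self.beta(cond).unsqueeze(-1).unsqueeze(-1)
        return g * h + b
```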
Item: VisibleUS: From Cryosectional Images to Real-Time Ultrasound Simulation (The Eurographics Association, 2025)
Authors: Casanova-Salas, Pablo; Gimeno, Jesus; Blasco-Serra, Arantxa; González-Soler, Eva María; Escamilla-Muñoz, Laura; Valverde-Navarro, Alfonso Amador; Fernández, Marcos; Portalés, Cristina. Editors: Günther, Tobias; Montazeri, Zahra.
The VisibleUS project aims to generate synthetic ultrasound images from cryosection images, focusing on the musculoskeletal system. Cryosection images provide a highly accurate, artifact-free representation of real tissue structures. Using this rich anatomical data, we developed a ray-tracing-based simulation algorithm that models ultrasound wave propagation, scattering, and attenuation. The result is highly realistic ultrasound images that accurately depict fine anatomical details, such as muscle fibers and connective tissues. The simulation tool has various applications, including generating datasets for training neural networks and building interactive training tools for ultrasound specialists. Its ability to produce realistic ultrasound images in real time enhances medical education and research, improving both the understanding and interpretation of ultrasound imaging.
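The abstract names attenuation as one of the effects the ray tracer models. A minimal sketch of how depth-dependent attenuation could be accumulated along one ray under a standard frequency-dependent model; the coefficients, step size, and frequency below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def attenuate_along_ray(tissue_atten_db_cm_mhz, step_cm, freq_mhz=7.5):
    """Remaining ultrasound intensity at each sample along a ray.

    tissue_atten_db_cm_mhz: per-sample attenuation coefficients
    (dB / cm / MHz) of the tissues the ray traverses, e.g., looked up
    from a labeled cryosection volume.
    """
    # Cumulative attenuation in dB, then converted to linear intensity.
    total_db = np.cumsum(tissue_atten_db_cm_mhz) * step_cm * freq_mhz
    return 10.0 ** (-total_db / 10.0)

# Example: 2 cm of muscle (~1.1 dB/cm/MHz) sampled every 0.5 mm.
coeffs = np.full(40, 1.1)
intensity = attenuate_along_ray(coeffs, step_cm=0.05)
```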