Search Results

  • Item
    Neural Film Grain Rendering
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Lesné, Gwilherm; Gousseau, Yann; Ladjal, Saïd; Newson, Alasdair; Bousseau, Adrien; Day, Angela
    Film grain refers to the specific texture of film-acquired images, which arises from the physical nature of photographic film. Because it is a visual signature of such images, there is strong interest in the film industry in rendering these textures for digital images. Some previous works closely mimic the physics of film and produce high-quality results, but are computationally expensive. We propose a method based on a lightweight neural network and a texture-aware loss function, achieving realistic results with very low complexity, even for large grains and high resolutions. We evaluate our algorithm both quantitatively and qualitatively with respect to previous work.
  • Item
    Mesh Compression with Quantized Neural Displacement Fields
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Pentapati, Sai Karthikey; Phillips, Gregoire; Bovik, Alan C.; Bousseau, Adrien; Day, Angela
    Implicit neural representations (INRs) have been successfully used to compress a variety of 3D surface representations such as Signed Distance Functions (SDFs) and voxel grids, as well as other forms of structured data such as images, videos, and audio. However, these methods have been limited in their application to unstructured data such as 3D meshes and point clouds. This work presents a simple yet effective method that extends the usage of INRs to compress 3D triangle meshes. Our method encodes a displacement field that refines the coarse version of the 3D mesh surface to be compressed using a small neural network. Once trained, the neural network weights occupy far less memory than the displacement field or the original surface. We show that our method is capable of preserving intricate geometric textures and demonstrates state-of-the-art performance for compression ratios ranging from 4x to 380x (See Figure 1 for an example).
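The core idea of the abstract above (a small network standing in for a dense per-vertex displacement field, so only its weights need to be stored) can be sketched as follows. All sizes, layer shapes, and names here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Hypothetical sizes: a coarse mesh with 100k vertices vs. a tiny 2-layer MLP.
# The MLP maps a 3D surface point to a 3D displacement (all names illustrative).
rng = np.random.default_rng(0)
hidden = 64
W1, b1 = rng.normal(size=(3, hidden)), np.zeros(hidden)
W2, b2 = rng.normal(size=(hidden, 3)), np.zeros(3)

def displacement(p):
    """Evaluate the neural displacement field at points p of shape (N, 3)."""
    h = np.maximum(p @ W1 + b1, 0.0)   # ReLU hidden layer
    return h @ W2 + b2

coarse_vertices = rng.normal(size=(100_000, 3))
refined = coarse_vertices + displacement(coarse_vertices)

# Storage comparison: explicit per-vertex displacements vs. network weights.
explicit_floats = coarse_vertices.size                 # one 3-vector per vertex
mlp_floats = W1.size + b1.size + W2.size + b2.size     # 451 parameters
print(explicit_floats, mlp_floats)                     # 300000 vs 451
```

In practice the network would be trained so that `refined` matches the detailed surface; the compression ratio then follows from the parameter count alone.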
  • Item
    Neural Two-Level Monte Carlo Real-Time Rendering
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Dereviannykh, Mikhail; Klepikov, Dmitrii; Hanika, Johannes; Dachsbacher, Carsten; Bousseau, Adrien; Day, Angela
    We introduce an efficient Two-Level Monte Carlo (subset of Multi-Level Monte Carlo, MLMC) estimator for real-time rendering of scenes with global illumination. Using MLMC we split the shading integral into two parts: the radiance cache integral and the residual error integral that compensates for the bias of the first one. For the first part, we developed the Neural Incident Radiance Cache (NIRC), leveraging the power of tiny neural networks [MRNK21] as a building block, which is trained on the fly. The cache is designed to provide a fast and reasonable approximation of the incident radiance: an evaluation takes 2-25× less compute time than a path tracing sample. This enables us to estimate the radiance cache integral with a high number of samples, thereby achieving faster convergence. For the residual error integral, we compute the difference between the NIRC predictions and the unbiased path tracing simulation. Our method makes no assumptions about the geometry, materials, or lighting of a scene and has only a few intuitive hyper-parameters. We provide a comprehensive comparative analysis in different experimental scenarios. Since the algorithm is trained in an online fashion, it demonstrates significant noise level reduction even for dynamic scenes and can easily be combined with other noise reduction techniques.
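The two-level split described above can be illustrated on a toy 1D integral: many cheap samples estimate the cache term, and a few expensive samples estimate the residual that corrects the cache's bias. The integrands below are stand-ins, not the paper's radiance functions:

```python
import numpy as np

# Toy illustration of the split  I = ∫ g  +  ∫ (f - g):
# f plays the role of the expensive path-traced integrand,
# g the role of the cheap radiance cache (both are stand-ins).
f = lambda x: np.sin(3 * x) ** 2 + 0.5 * x
g = lambda x: 0.5 + 0.5 * x

rng = np.random.default_rng(1)
# Many cheap samples for the cache integral ...
xc = rng.uniform(0, 1, 100_000)
cache_term = np.mean(g(xc))
# ... few expensive samples for the residual, which removes the cache's bias.
xr = rng.uniform(0, 1, 1_000)
residual_term = np.mean(f(xr) - g(xr))

estimate = cache_term + residual_term
print(estimate)  # close to the true integral of f on [0, 1]
```

Because `g` absorbs most of the signal, the residual has low variance, so the few expensive samples suffice; this is the mechanism behind the faster convergence claimed in the abstract.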
  • Item
    2D Neural Fields with Learned Discontinuities
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Liu, Chenxi; Wang, Siqi; Fisher, Matthew; Aneja, Deepali; Jacobson, Alec; Bousseau, Adrien; Day, Angela
    Effective representation of 2D images is fundamental in digital image processing, where traditional methods like raster and vector graphics struggle with sharpness and textural complexity, respectively. Current neural fields offer high fidelity and resolution independence but require predefined meshes with known discontinuities, restricting their utility. We observe that by treating all mesh edges as potential discontinuities, we can represent the discontinuity magnitudes as continuous variables and optimize them. We further introduce a novel discontinuous neural field model that jointly approximates the target image and recovers discontinuities. Through systematic evaluations, our neural field outperforms other methods that fit unknown discontinuities with discontinuous representations, exceeding Field of Junctions and Boundary Attention by over 11dB in both denoising and super-resolution tasks and achieving 3.5× smaller Chamfer distances than Mumford-Shah-based methods. It also surpasses InstantNGP with improvements of more than 5dB (denoising) and 10dB (super-resolution). Additionally, our approach shows remarkable capability in approximating complex artistic and natural images and cleaning up diffusion-generated depth maps.
  • Item
    Viewpoint Optimization for 3D Graph Drawings
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Wageningen, Simon van; Mchedlidze, Tamara; Telea, Alexandru; Aigner, Wolfgang; Andrienko, Natalia; Wang, Bei
    Graph drawings using a node-link metaphor and straight edges are widely used to represent and understand relational data. While such drawings are typically created in 2D, 3D representations have also gained popularity. When exploring 3D drawings, finding viewpoints that aid understanding of the graph's structure is crucial. Good viewpoints also allow 3D drawings to be used to generate good 2D graph drawings. In this work, we tackle the problem of automatically finding high-quality viewpoints for 3D graph drawings. We propose and evaluate strategies based on sampling, gradient descent, and evolutionary-inspired meta-heuristics. Our results show that most strategies quickly converge to high-quality viewpoints within a few dozen function evaluations, with meta-heuristic approaches showing robust performance regardless of the quality metric.
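The sampling strategy mentioned above can be sketched as scoring candidate view directions under a quality metric and keeping the best. The metric here (minimum pairwise distance between projected nodes, a proxy for node overlap) is an illustrative stand-in, not one of the paper's metrics:

```python
import numpy as np

rng = np.random.default_rng(2)
nodes = rng.normal(size=(20, 3))  # a toy 3D graph layout (node positions only)

def viewpoint_quality(direction, pts):
    """Score a view direction by how well the projected nodes stay separated.
    Orthographic projection onto the plane orthogonal to `direction`;
    larger minimum pairwise distance means less apparent node overlap."""
    d = direction / np.linalg.norm(direction)
    proj = pts - np.outer(pts @ d, d)            # drop the depth component
    diffs = proj[:, None, :] - proj[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    np.fill_diagonal(dists, np.inf)              # ignore self-distances
    return dists.min()

# Sampling strategy: evaluate a few dozen random directions, keep the best,
# mirroring the "few dozen function evaluations" budget in the abstract.
candidates = rng.normal(size=(50, 3))
scores = [viewpoint_quality(c, nodes) for c in candidates]
best = candidates[int(np.argmax(scores))]
```

Gradient-descent or evolutionary variants would refine `best` further instead of stopping at the sampled optimum.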
  • Item
    OctFusion: Octree-based Diffusion Models for 3D Shape Generation
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Xiong, Bojun; Wei, Si-Tong; Zheng, Xin-Yang; Cao, Yan-Pei; Lian, Zhouhui; Wang, Peng-Shuai; Attene, Marco; Sellán, Silvia
    Diffusion models have emerged as a popular method for 3D generation. However, it is still challenging for diffusion models to efficiently generate diverse and high-quality 3D shapes. In this paper, we introduce OctFusion, which can generate 3D shapes with arbitrary resolutions in 2.5 seconds on a single Nvidia 4090 GPU, and the extracted meshes are guaranteed to be continuous and manifold. The key components of OctFusion are the octree-based latent representation and the accompanying diffusion models. The representation combines the benefits of both implicit neural representations and explicit spatial octrees and is learned with an octree-based variational autoencoder. The proposed diffusion model is a unified multi-scale U-Net that enables weight and computation sharing across different octree levels and avoids the complexity of widely used cascaded diffusion schemes. We verify the effectiveness of OctFusion on the ShapeNet and Objaverse datasets and achieve state-of-the-art performance on shape generation tasks. We demonstrate that OctFusion is extendable and flexible by generating high-quality color fields for textured mesh generation and high-quality 3D shapes conditioned on text prompts, sketches, or category labels. Our code and pre-trained models are available at https://github.com/octree-nn/octfusion.
  • Item
    Volume Preserving Neural Shape Morphing
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Buonomo, Camille; Digne, Julie; Chaine, Raphaelle; Attene, Marco; Sellán, Silvia
    Shape interpolation is a long-standing challenge of geometry processing. As it is ill-posed, shape interpolation methods always operate under some hypothesis, such as semantic part matching or least displacement. Among such constraints, volume preservation is one of the traditional animation principles. In this paper we propose a method to interpolate between shapes in arbitrary poses while favoring volume and topology preservation. To do so, we rely on a level set representation of the shape and its advection by a velocity field through the level set equation, with both the shape representation and the velocity field parameterized as neural networks. While divergence-free velocity fields ensure volume and topology preservation, they are incompatible with the Eikonal constraint of signed distance functions. This leads us to introduce the notion of an adaptive divergence velocity field, a construction compatible with the Eikonal equation that carries theoretical guarantees on volume preservation. In the non-constant-volume setting, our method still provides a natural morphing when combined with a parameterization of the volume change over time. We show experimentally that our method exhibits better volume preservation than other recent approaches, limits topological changes, and better preserves shape structures without landmark correspondences.
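The tension the abstract describes can be stated compactly. These are the standard level set and Eikonal equations implied by the text, written out for clarity rather than taken from the paper:

```latex
% Level set advection of the shape's implicit function \phi by velocity v:
\frac{\partial \phi}{\partial t} + v \cdot \nabla \phi = 0
% Volume preservation of the enclosed region under a divergence-free field:
\nabla \cdot v = 0 \;\Rightarrow\; \frac{d}{dt}\int_{\{\phi < 0\}} dV = 0
% The Eikonal constraint a signed distance function must satisfy, which
% advection by a divergence-free v generally cannot maintain:
\|\nabla \phi\| = 1
```

The "adaptive divergence velocity field" is the paper's construction for reconciling the second and third conditions.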
  • Item
    Im2SurfTex: Surface Texture Generation via Neural Backprojection of Multi-View Images
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Georgiou, Yiangos; Loizou, Marios; Averkiou, Melinos; Kalogerakis, Evangelos; Attene, Marco; Sellán, Silvia
    We present Im2SurfTex, a method that generates textures for input 3D shapes by learning to aggregate multi-view image outputs produced by 2D image diffusion models onto the shapes' texture space. Unlike existing texture generation techniques that use ad hoc backprojection and averaging schemes to blend multi-view images into textures, often resulting in texture seams and artifacts, our approach employs a trained neural module to boost texture coherency. The key ingredient of our module is to leverage neural attention and appropriate positional encodings of image pixels based on their corresponding 3D point positions, normals, and surface-aware coordinates as encoded in geodesic distances within surface patches. These encodings capture texture correlations between neighboring surface points, ensuring better texture continuity. Experimental results show that our module improves texture quality, achieving superior performance in high-resolution texture generation.
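The attention-based blending the abstract contrasts with ad hoc averaging can be sketched for a single texel. The features below are random stand-ins for the positional encodings (3D position, normal, geodesic surface coordinates) the paper describes; the mechanism shown is generic scaled dot-product attention, not the authors' trained module:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(3)
n_views, dim = 4, 8
# For one texel: candidate colors backprojected from each view, plus a
# per-view feature vector (stand-in for the positional encodings of 3D
# position, normal, and geodesic coordinates mentioned in the abstract).
colors = rng.uniform(0, 1, size=(n_views, 3))
view_feats = rng.normal(size=(n_views, dim))
texel_query = rng.normal(size=(dim,))

# Attention replaces uniform averaging: views whose encodings agree with
# the texel's query dominate the blend, reducing seams between views.
weights = softmax(view_feats @ texel_query / np.sqrt(dim))
blended = weights @ colors   # final texel color, a convex combination
```

A uniform average would correspond to `weights = np.full(n_views, 1 / n_views)`; the learned weighting is what the trained module replaces it with.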