Search Results

Now showing 1 - 3 of 3
  • Item
    OctFusion: Octree-based Diffusion Models for 3D Shape Generation
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Xiong, Bojun; Wei, Si-Tong; Zheng, Xin-Yang; Cao, Yan-Pei; Lian, Zhouhui; Wang, Peng-Shuai; Attene, Marco; Sellán, Silvia
    Diffusion models have emerged as a popular method for 3D generation. However, it is still challenging for diffusion models to efficiently generate diverse and high-quality 3D shapes. In this paper, we introduce OctFusion, which can generate 3D shapes at arbitrary resolutions in 2.5 seconds on a single Nvidia 4090 GPU, and the extracted meshes are guaranteed to be continuous and manifold. The key components of OctFusion are the octree-based latent representation and the accompanying diffusion models. The representation combines the benefits of both implicit neural representations and explicit spatial octrees and is learned with an octree-based variational autoencoder. The proposed diffusion model is a unified multi-scale U-Net that enables weight and computation sharing across different octree levels and avoids the complexity of widely used cascaded diffusion schemes. We verify the effectiveness of OctFusion on the ShapeNet and Objaverse datasets and achieve state-of-the-art performance on shape generation tasks. We demonstrate that OctFusion is extendable and flexible by generating high-quality color fields for textured mesh generation and high-quality 3D shapes conditioned on text prompts, sketches, or category labels. Our code and pre-trained models are available at https://github.com/octree-nn/octfusion.
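    A minimal sketch of the weight-sharing idea the abstract describes: a single set of denoiser parameters is applied to latent features at every octree depth, instead of training one cascaded model per resolution. All names, shapes, and the toy "denoiser" below are illustrative stand-ins, not the authors' implementation.

```python
# Hypothetical illustration of a unified multi-scale denoiser: the SAME
# weights W are reused at every octree level (coarse -> fine), avoiding a
# cascade of per-resolution models.
import numpy as np

rng = np.random.default_rng(0)

feat_dim = 8
W = rng.standard_normal((feat_dim, feat_dim)) * 0.1  # shared "U-Net" weights

def shared_denoise(latents: np.ndarray) -> np.ndarray:
    """Apply the shared weights to per-node latent features (N, feat_dim)."""
    return latents + latents @ W  # residual update, same W at every level

# Octree levels with progressively more leaf nodes (8, 64, 512 nodes).
levels = {depth: rng.standard_normal((2 ** (3 * depth), feat_dim))
          for depth in (1, 2, 3)}

# One reverse-diffusion step applied level by level with shared weights.
denoised = {d: shared_denoise(x) for d, x in levels.items()}
for d, x in denoised.items():
    print(d, x.shape)
```

    The point of the sketch is only the parameter reuse across levels; in the paper the shared module is a multi-scale U-Net operating on octree latents.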
  • Item
    Volume Preserving Neural Shape Morphing
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Buonomo, Camille; Digne, Julie; Chaine, Raphaelle; Attene, Marco; Sellán, Silvia
    Shape interpolation is a long-standing challenge of geometry processing. Because the problem is ill-posed, interpolation methods always work under assumptions such as semantic part matching or least displacement. Among such constraints, volume preservation is one of the traditional animation principles. In this paper we propose a method to interpolate between shapes in arbitrary poses favoring volume and topology preservation. To do so, we rely on a level set representation of the shape and its advection by a velocity field through the level set equation, both the shape representation and the velocity field being parameterized as neural networks. While divergence-free velocity fields ensure volume and topology preservation, they are incompatible with the Eikonal constraint of signed distance functions. This leads us to introduce the notion of an adaptive divergence velocity field, a construction compatible with the Eikonal equation with a theoretical guarantee on the shape volume preservation. In the non-constant-volume setting, our method still provides a natural morphing, by combining it with a parameterization of the volume change over time. We show experimentally that our method exhibits better volume preservation than other recent approaches, limits topological changes, and better preserves shape structure without landmark correspondences.
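    As a hedged sketch of the standard machinery this abstract builds on (not the authors' exact formulation): the shape is the sub-level set of a function $\phi$ advected by the level set equation, volume preservation follows from a divergence-free velocity, and the Eikonal constraint is what makes $\phi$ a signed distance function.

```latex
% Level set advection of the implicit shape \Omega(t) = \{x : \phi(x,t) < 0\}:
\frac{\partial \phi}{\partial t} + \mathbf{v} \cdot \nabla \phi = 0 .

% Volume preservation: by the transport theorem, the enclosed volume
% |\Omega(t)| is constant whenever the velocity field is divergence-free,
\nabla \cdot \mathbf{v} = 0 .

% Eikonal constraint: \phi is a signed distance function iff
\lVert \nabla \phi \rVert = 1 .
```

    The tension the abstract points at is that advecting a signed distance function with a general divergence-free field does not keep $\lVert \nabla \phi \rVert = 1$, which is what motivates their adaptive divergence construction.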
  • Item
    Im2SurfTex: Surface Texture Generation via Neural Backprojection of Multi-View Images
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Georgiou, Yiangos; Loizou, Marios; Averkiou, Melinos; Kalogerakis, Evangelos; Attene, Marco; Sellán, Silvia
    We present Im2SurfTex, a method that generates textures for input 3D shapes by learning to aggregate multi-view image outputs produced by 2D image diffusion models onto the shapes' texture space. Unlike existing texture generation techniques that use ad hoc backprojection and averaging schemes to blend multi-view images into textures, often resulting in texture seams and artifacts, our approach employs a trained neural module to boost texture coherency. The key ingredient of our module is to leverage neural attention and appropriate positional encodings of image pixels based on their corresponding 3D point positions, normals, and surface-aware coordinates as encoded in geodesic distances within surface patches. These encodings capture texture correlations between neighboring surface points, ensuring better texture continuity. Experimental results show that our module improves texture quality, achieving superior performance in high-resolution texture generation.
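    The aggregation idea can be sketched as attention-weighted blending: each texel gathers candidate colors backprojected from several views, and learned attention over geometric features replaces naive averaging. Everything below (feature dimensions, the single weight matrix, random stand-in encodings) is hypothetical and only illustrates the pattern, not the authors' network.

```python
# Illustrative attention-based aggregation of multi-view colors per texel.
# feats stands in for encodings of 3D position, normal, and surface-aware
# (geodesic) coordinates; Wq stands in for learned attention weights.
import numpy as np

rng = np.random.default_rng(1)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

T, V = 4, 3                                   # texels, candidate views each
colors = rng.uniform(size=(T, V, 3))          # backprojected RGB per view
feats = rng.standard_normal((T, V, 6))        # per-candidate geometric features
Wq = rng.standard_normal((6, 6)) * 0.1        # stand-in learned weights

query = feats.mean(axis=1) @ Wq               # one query per texel
scores = np.einsum('tf,tvf->tv', query, feats)
weights = softmax(scores, axis=-1)            # attention over candidate views
texels = np.einsum('tv,tvc->tc', weights, colors)  # blended texture colors
print(texels.shape)
```

    Because the weights come from geometry-aware features rather than fixed view heuristics, nearby texels with similar encodings receive consistent blends, which is the mechanism the abstract credits for reduced seams.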