44-Issue 6
Browsing 44-Issue 6 by Subject "Computing methodologies→Reconstruction"
Item: NCD: Normal-Guided Chamfer Distance Loss for Watertight Mesh Reconstruction from Unoriented Point Clouds (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Li, Jiaxin; Tan, Jiawei; Ou, Zhilong; Wang, Hongxing; Wimmer, Michael; Alliez, Pierre; Westermann, Rüdiger

As a widely used loss function in learnable watertight mesh reconstruction from unoriented point clouds, Chamfer Distance (CD) efficiently quantifies the alignment between the point cloud sampled from the reconstructed mesh and the corresponding input point cloud. To enhance reconstruction fidelity, CD is sometimes augmented with a normal consistency term, albeit at the cost of efficiency: normal estimation for unoriented point clouds requires computationally intensive matrix decomposition or specialized pre-trained models, whereas normals for mesh-sampled points can be readily derived from the cross product of the edge vectors of each mesh face. However, reconstruction models employing CD and its variants typically rely solely on the spatial coordinates of the points, omitting normal information in favor of efficiency and deployability. To tackle this challenge, we propose a novel loss function for watertight mesh reconstruction from unoriented point clouds, termed Normal-guided Chamfer Distance (NCD). Building upon CD, NCD introduces a normal-steered weighting mechanism based on the angle between the normal at each mesh-sampled point and the vector to its corresponding input point, offering several advantages: (i) it leverages readily available mesh-sampled point normals to weight coordinate-based Euclidean distances, thus extending the capability of CD; (ii) it eliminates the need for normal estimation from input unoriented point clouds; (iii) it incurs a negligible increase in computational complexity compared to CD.
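The abstract's exact weighting function is not given here, but the idea of a normal-steered Chamfer term can be sketched as follows. This is a minimal illustrative sketch, not the paper's formulation: the weight `w = 1 + lam * (1 - |cos θ|)` and the hyperparameter `lam` are assumptions chosen only to show how the angle between a mesh-sampled point's normal and the vector to its nearest input point could modulate the squared-distance term.

```python
import numpy as np

def normal_guided_chamfer(mesh_pts, mesh_normals, input_pts, lam=1.0):
    """Illustrative NCD-style loss (NOT the paper's exact formulation).

    mesh_pts:     (M, 3) points sampled from the reconstructed mesh
    mesh_normals: (M, 3) unit normals at those samples (assumed normalized)
    input_pts:    (N, 3) input unoriented point cloud (no normals needed)
    lam:          hypothetical strength of the normal-steered weighting
    """
    # Pairwise squared distances, (M, N).
    d2 = ((mesh_pts[:, None, :] - input_pts[None, :, :]) ** 2).sum(-1)

    # Mesh -> input term: nearest input point for each mesh sample.
    idx = d2.argmin(axis=1)
    vec = input_pts[idx] - mesh_pts                 # vector to corresponding point
    vnorm = np.linalg.norm(vec, axis=1) + 1e-12
    cos = np.abs((vec * mesh_normals).sum(axis=1)) / vnorm  # |cos(angle)|

    # Hypothetical weight: larger when the displacement deviates from the normal.
    w = 1.0 + lam * (1.0 - cos)
    term_m2i = (w * d2[np.arange(len(mesh_pts)), idx]).mean()

    # Input -> mesh term: plain (unweighted) Chamfer direction.
    term_i2m = d2.min(axis=0).mean()
    return term_m2i + term_i2m
```

Note that only the mesh-side normals appear; the input cloud contributes coordinates alone, which is the property the abstract highlights.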
We employ NCD as the training loss for point-to-mesh reconstruction with multiple models and initial watertight meshes on benchmark datasets, demonstrating its superiority over state-of-the-art CD variants.

Item: Self-Calibrating Fisheye Lens Aberrations for Novel View Synthesis (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Xiang, Jinhui; Li, Yuqi; Li, Jiabao; Zheng, Wenxing; Fu, Qiang; Wimmer, Michael; Alliez, Pierre; Westermann, Rüdiger

Neural rendering techniques such as Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3D-GS) have led to significant advancements in scene reconstruction and novel view synthesis (NVS). These methods assume an ideal pinhole camera model, free from lens distortion and optical aberrations. However, fisheye lenses introduce unavoidable aberrations due to their wide-angle design and complex manufacturing, leading to multi-view inconsistencies that compromise scene reconstruction quality. In this paper, we propose an end-to-end framework that integrates a standard 3D reconstruction pipeline with our lens aberration model to simultaneously calibrate lens aberrations and reconstruct 3D scenes. By modelling the real imaging process and jointly optimising both tasks, our framework eliminates the impact of aberration-induced inconsistencies on reconstruction. Additionally, we propose a curriculum learning approach that ensures stable optimisation and high-quality reconstruction even in the presence of multiple aberrations. To address the limitations of existing benchmarks, we introduce AbeRec, a dataset composed of scenes captured with lenses exhibiting severe aberrations. Extensive experiments on both existing public datasets and our proposed dataset demonstrate that our method not only significantly outperforms previous state-of-the-art methods on fisheye lenses with severe aberrations, but also generalises well to scenes captured by non-fisheye lenses.
Code and datasets are available at https://github.com/CPREgroup/Calibrating-Fisheye-Lens-Aberration-for-NVS.
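The paper's aberration model and joint optimisation are not reproduced here, but the core calibration idea (recovering lens-distortion parameters from how observed points deviate from their ideal pinhole projections) can be illustrated on a much simpler sub-problem. The sketch below fits a hypothetical two-coefficient polynomial radial distortion model by linear least squares; real fisheye aberrations are richer (field-dependent blur, chromatic effects) and in the paper are optimised jointly with the scene, so treat this only as a toy analogue.

```python
import numpy as np

def apply_radial_distortion(xy, k):
    """Hypothetical radial model: xy' = xy * (1 + k1*r^2 + k2*r^4).

    xy: (N, 2) normalized image coordinates; k: (k1, k2).
    """
    r2 = (xy ** 2).sum(axis=1, keepdims=True)
    return xy * (1.0 + k[0] * r2 + k[1] * r2 ** 2)

def fit_radial_distortion(ideal_xy, observed_xy):
    """Least-squares recovery of (k1, k2) from ideal->observed correspondences.

    Since observed - ideal = ideal * (k1*r^2 + k2*r^4), the residual is
    linear in (k1, k2) and one lstsq call suffices.
    """
    r2 = (ideal_xy ** 2).sum(axis=1, keepdims=True)
    A = np.column_stack([(ideal_xy * r2).ravel(),
                         (ideal_xy * r2 ** 2).ravel()])
    b = (observed_xy - ideal_xy).ravel()
    k, *_ = np.linalg.lstsq(A, b, rcond=None)
    return k
```

In a self-calibrating pipeline like the one described above, such distortion parameters would not be fitted in isolation but updated alongside the scene representation during reconstruction.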