Volume 44 (2025)
Browsing Volume 44 (2025) by Subject "3D imaging"
Now showing 1 - 3 of 3
Item
ClothingTwin: Reconstructing Inner and Outer Layers of Clothing Using 3D Gaussian Splatting
(The Eurographics Association and John Wiley & Sons Ltd., 2025) Jung, Munkyung; Lee, Dohae; Lee, In-Kwon; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
We introduce ClothingTwin, a novel end-to-end framework for reconstructing 3D digital twins of clothing that capture both the outer and inner fabric, without the need for manual mannequin removal. Traditional 2D "ghost mannequin" photography techniques remove the mannequin and composite partial inner textures to create images in which the garment appears as if it were worn by a transparent model. However, extending such methods to photorealistic 3D Gaussian Splatting (3DGS) is far more challenging. Achieving consistent inner-layer compositing across the large sets of images used for 3DGS optimization quickly becomes impractical if done manually. To address these issues, ClothingTwin introduces three key innovations. First, a specialized image acquisition protocol captures two sets of images for each garment: one worn normally on the mannequin (outer layer exposed) and one worn inside-out (inner layer exposed). This eliminates the need to painstakingly edit out mannequins in thousands of images and provides full coverage of all fabric surfaces. Second, we employ a mesh-guided 3DGS reconstruction for each layer and leverage Non-Rigid Iterative Closest Point (ICP) to align the outer and inner point clouds despite their distinct geometries. Third, our enhanced rendering pipeline, featuring mesh-guided back-face culling, back-to-front alpha blending, and recalculated spherical harmonic angles, ensures photorealistic visualization of the combined outer and inner layers without inter-layer artifacts. Experimental evaluations on various garments show that ClothingTwin outperforms conventional 3DGS-based methods, and our ablation study validates the effectiveness of each proposed component.
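To illustrate the back-to-front alpha blending step named in the ClothingTwin abstract, here is a minimal sketch. It is not the paper's pipeline, which composites sorted Gaussian fragments with mesh-guided culling and recomputed spherical harmonics; it only shows the standard compositing rule applied to two pre-rendered RGBA layers, and all names are hypothetical.

```python
import numpy as np

def composite_back_to_front(layers):
    """Alpha-blend pre-rendered RGBA layers ordered back (inner fabric)
    to front (outer fabric): out = a * c + (1 - a) * out per layer."""
    out = np.zeros(layers[0].shape[:2] + (3,))
    for layer in layers:
        rgb, alpha = layer[..., :3], layer[..., 3:4]
        out = alpha * rgb + (1.0 - alpha) * out
    return out

# Hypothetical per-layer renders of the same view (H x W x RGBA in [0, 1]).
inner = np.random.rand(64, 64, 4)
outer = np.random.rand(64, 64, 4)
image = composite_back_to_front([inner, outer])
```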
Item
Joint Deblurring and 3D Reconstruction for Macrophotography
(The Eurographics Association and John Wiley & Sons Ltd., 2025) Zhao, Yifan; Li, Liangchen; Zhou, Yuqi; Wang, Kai; Liang, Yan; Zhang, Juyong; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
Macro lenses offer high resolution and large magnification, and 3D modeling of small, detailed objects can provide richer information. However, defocus blur in macrophotography is a long-standing problem that severely hinders clear imaging of the captured objects and their high-quality 3D reconstruction. Traditional image deblurring methods require a large number of images and annotations, and there is currently no multi-view 3D reconstruction method for macrophotography. In this work, we propose a joint deblurring and 3D reconstruction method for macrophotography. Starting from captured multi-view blurry images, we jointly optimize a sharp 3D model of the object and the defocus blur kernel of each pixel. The entire framework adopts differentiable rendering to self-supervise the optimization of the 3D model and the defocus blur kernel. Extensive experiments show that, from a small number of multi-view images, our method not only achieves high-quality image deblurring but also recovers high-fidelity 3D appearance.

Item
Multimodal 3D Few-Shot Classification via Gaussian Mixture Discriminant Analysis
(The Eurographics Association and John Wiley & Sons Ltd., 2025) Wu, Yiqi; Wu, Huachao; Hu, Ronglei; Chen, Yilin; Zhang, Dejun; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
While pre-trained 3D vision-language models are becoming increasingly available, there remains a lack of frameworks that can effectively harness their capabilities for few-shot classification. In this work, we propose PointGMDA, a training-free framework that combines Gaussian Mixture Models (GMMs) with Gaussian Discriminant Analysis (GDA) to perform robust classification using only a few labeled point cloud samples. Our method estimates GMM parameters per class from support data and computes mixture-weighted prototypes, which are then used in GDA with a shared covariance matrix to construct decision boundaries. This formulation models intra-class variability more expressively than traditional single-prototype approaches while maintaining analytical tractability. To incorporate semantic priors, we integrate CLIP-style textual prompts and fuse predictions from the geometric and textual modalities through a hybrid scoring strategy. We further introduce PointGMDA-T, a lightweight attention-guided refinement module that learns residuals for fast feature adaptation, improving robustness under distribution shift. Extensive experiments on ModelNet40 and ScanObjectNN demonstrate that PointGMDA outperforms strong baselines across a variety of few-shot settings, with consistent gains under both training-free and fine-tuned conditions. These results highlight the effectiveness and generality of our probabilistic modeling and multimodal adaptation framework. Our code is publicly available at https://github.com/djzgroup/PointGMDA.
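For the joint deblurring entry above, a toy PyTorch sketch of the self-supervised idea: jointly optimize a latent sharp estimate and a blur kernel so that re-blurring reproduces the observation. The paper optimizes a full 3D model and a per-pixel defocus kernel through differentiable rendering; this simplified version uses a single global Gaussian kernel on a 2D image, and all names are hypothetical.

```python
import torch
import torch.nn.functional as F

# Stand-in for an observed blurry view; in the paper the prediction would
# come from rendering the optimized 3D model, not a free 2D image.
blurry = torch.rand(1, 1, 32, 32)

sharp = torch.rand(1, 1, 32, 32, requires_grad=True)  # latent sharp image
log_sigma = torch.zeros(1, requires_grad=True)        # blur kernel width

def gaussian_kernel(sigma, size=7):
    """Differentiable separable Gaussian kernel of the given width."""
    x = torch.arange(size, dtype=torch.float32) - size // 2
    g = torch.exp(-x**2 / (2 * sigma**2))
    g = g / g.sum()
    return (g[:, None] * g[None, :]).view(1, 1, size, size)

opt = torch.optim.Adam([sharp, log_sigma], lr=1e-2)
for _ in range(200):
    k = gaussian_kernel(log_sigma.exp())
    pred = F.conv2d(sharp, k, padding=3)  # re-blur the sharp estimate
    loss = F.mse_loss(pred, blurry)       # self-supervised photometric loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```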
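For the PointGMDA entry, a rough sketch of the training-free core under stated assumptions: a small per-class GMM fitted on support features yields a mixture-weighted prototype, which feeds a linear GDA score with a shared covariance. The encoder, CLIP-style text branch, hybrid fusion, and PointGMDA-T refinement are omitted, and all names and dimensions are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_prototypes(support, labels, n_components=2):
    """Per class: fit a small GMM on support features and return the
    mixture-weighted prototype (weights @ means)."""
    protos = {}
    for c in np.unique(labels):
        feats = support[labels == c]
        gmm = GaussianMixture(n_components=min(n_components, len(feats)),
                              covariance_type="diag", random_state=0).fit(feats)
        protos[c] = gmm.weights_ @ gmm.means_
    return protos

def gda_scores(query, protos, cov):
    """Linear discriminant score per class under a shared covariance."""
    prec = np.linalg.inv(cov + 1e-4 * np.eye(cov.shape[0]))  # regularized
    return {c: query @ prec @ mu - 0.5 * mu @ prec @ mu
            for c, mu in protos.items()}

# Illustrative 5-way 5-shot setup; features would come from a frozen
# pre-trained point-cloud encoder in practice.
rng = np.random.default_rng(0)
support = rng.normal(size=(25, 16))
labels = np.repeat(np.arange(5), 5)
protos = fit_prototypes(support, labels)
cov = np.cov(support, rowvar=False)  # shared covariance across classes
scores = gda_scores(rng.normal(size=16), protos, cov)
pred = max(scores, key=scores.get)
```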