44-Issue 7


Pacific Graphics 2025 - Symposium Proceedings
Taipei, Taiwan, hosted by National Chengchi University, October 14-17, 2025

(for Conference Papers and Posters see PG 2025 - Conference Papers, Posters, and Demos)
Digital Human
Uncertainty-Aware Adjustment via Learnable Coefficients for Detailed 3D Reconstruction of Clothed Humans from Single Images
Yadan Yang, Yunze Li, Fangli Ying, Aniwat Phaphuangwittayakul, and Riyad Dhuny
EmoDiffGes: Emotion-Aware Co-Speech Holistic Gesture Generation with Progressive Synergistic Diffusion
Xinru Li, Jingzhong Lin, Bohao Zhang, Yuanyuan Qi, Changbo Wang, and Gaoqi He
Feature Disentanglement in GANs for Photorealistic Multi-view Hair Transfer
Jiayi Xu, Zhengyang Wu, Chenming Zhang, Xiaogang Jin, and Yaohua Ji
Text-Guided Diffusion with Spectral Convolution for 3D Human Pose Estimation
Liyuan Shi, Suping Wu, Sheng Yang, Weibin Qiu, Dong Qiang, and Jiarui Zhao
Digital Clothing
Self-Supervised Humidity-Controllable Garment Simulation via Capillary Bridge Modeling
Min Shi, Xinran Wang, Jia-Qi Zhang, Lin Gao, Dengming Zhu, and Hongyan Zhang
Real-Time Per-Garment Virtual Try-On with Temporal Consistency for Loose-Fitting Garments
Zaiqiang Wu, I-Chao Shen, and Takeo Igarashi
ClothingTwin: Reconstructing Inner and Outer Layers of Clothing Using 3D Gaussian Splatting
Munkyung Jung, Dohae Lee, and In-Kwon Lee
Gaussian Splatting
WaterGS: Physically-Based Imaging in Gaussian Splatting for Underwater Scene Reconstruction
Su Qing Wang, Wen Bin Wu, Min Shi, Zhao Xin Li, Qi Wang, and Deng Ming Zhu
LucidFusion: Reconstructing 3D Gaussians with Arbitrary Unposed Images
Hao He, Yixun Liang, Luozhou Wang, Yuanhao Cai, Xinli Xu, Haoxiang Guo, Xiang Wen, and Yingcong Chen
Gaussians on their Way: Wasserstein-Constrained 4D Gaussian Splatting with State-Space Modeling
Junli Deng, Ping Shi, Yihao Luo, and Qipei Li
Gaussian Splatting for Large-Scale Aerial Scene Reconstruction From Ultra-High-Resolution Images
Qiulin Sun, Wei Lai, Yixian Li, and Yanci Zhang
G-SplatGAN: Disentangled 3D Gaussian Generation for Complex Shapes via Multi-Scale Patch Discriminators
Jiaqi Li, Haochuan Dang, Zhi Zhou, Junke Zhu, and Zhangjin Huang
GS-Share: Enabling High-fidelity Map Sharing with Incremental Gaussian Splatting
Xinran Zhang, Hanqi Zhu, Yifan Duan, and Yanyong Zhang
Introducing Unbiased Depth into 2D Gaussian Splatting for High-accuracy Surface Reconstruction
Yixin Yang, Yang Zhou, and Hui Huang
GNF: Gaussian Neural Fields for Multidimensional Signal Representation and Reconstruction
Abelaziz Bouzidi, Hamid Laga, Hazem Wannous, and Ferdous Sohel
Graphic & Artistic Designs
LayoutRectifier: An Optimization-based Post-processing for Graphic Design Layout Generation
I-Chao Shen, Ariel Shamir, and Takeo Igarashi
View-Independent Wire Art Modeling via Manifold Fitting
HuiGuang Huang, Dong-Yi Wu, Yulin Wang, Yu Cao, and Tong-Yee Lee
Image Creation & Augmentation
DAATSim: Depth-Aware Atmospheric Turbulence Simulation for Fast Image Rendering
Ripon Kumar Saha, Yufan Zhang, Jinwei Ye, and Suren Jayasuriya
Hybrid Sparse Transformer and Feature Alignment for Efficient Image Completion
L. Chen and Hao Sun
Detecting & Estimating from Images and Videos
Region-Aware Sparse Attention Network for Lane Detection
Yan Deng and Guoqiang Xiao
BoxFusion: Reconstruction-Free Open-Vocabulary 3D Object Detection via Real-Time Multi-View Box Fusion
Yuqing Lan, Chenyang Zhu, Zhirui Gao, Jiazhao Zhang, Yihan Cao, Renjiao Yi, Yijie Wang, and Kai Xu
FlowCapX: Physics-Grounded Flow Capture with Long-Term Consistency
Ningxiao Tao, Liru Zhang, Xingyu Ni, Mengyu Chu, and Baoquan Chen
Lighting & Rendering
High-Performance Elliptical Cone Tracing
Umut Emre, Aryan Kanak, and Shlomi Steinberg
Geometric Integration for Neural Control Variates
Daniel Meister and Takahiro Harada
TensoIS: A Step Towards Feed-Forward Tensorial Inverse Subsurface Scattering for Perlin Distributed Heterogeneous Media
Ashish Tiwari, Satyam Bhardwaj, Yash Bachwana, Parag Sarvoday Sahu, T. M. Feroz Ali, Bhargava Chintalapati, and Shanmuganathan Raman
LTC-IR: Multiview Edge-Aware Inverse Rendering with Linearly Transformed Cosines
Dabeen Park, Junsuh Park, Jooeun Son, Seungyong Lee, and Joo Ho Lee
Lines, Surfaces & Fields
Projective Displacement Mapping for Ray Traced Editable Surfaces
Rama Hoetzlein
Single-Line Drawing Vectorization
Tanguy Magne and Olga Sorkine-Hornung
Accelerating Signed Distance Functions
Pierre Hubert-Brierre, Eric Guérin, Adrien Peytavie, and Eric Galin
FlatCAD: Fast Curvature Regularization of Neural SDFs for CAD Models
Haotian Yin, Aleksander Plocharski, Michal Jan Wlodarczyk, Mikolaj Kida, and Przemyslaw Musialski
RT-HDIST: Ray-Tracing Core-based Hausdorff Distance Computation
YoungWoo Kim, Jaehong Lee, and Duksu Kim
Creating and Processing Point Clouds
IPFNet: Implicit Primitive Fitting for Robust Point Cloud Segmentation
Shengdi Zhou, Xiaoqiang Zan, and Bin Zhou
FAHNet: Accurate and Robust Normal Estimation for Point Clouds via Frequency-Aware Hierarchical Geometry
Chengwei Wang, Wenming Wu, Yue Fei, Gaofeng Zhang, and Liping Zheng
PARC: A Two-Stage Multi-Modal Framework for Point Cloud Completion
Yujiao Cai and Yuhao Su
Multimodal 3D Few-Shot Classification via Gaussian Mixture Discriminant Analysis
Yiqi Wu, Huachao Wu, Ronglei Hu, Yilin Chen, and Dejun Zhang
Preconditioned Deformation Grids
Julian Kaltheuner, Alexander Oebel, Hannah Droege, Patrick Stotko, and Reinhard Klein
Reconstruction from Close-up Images
Joint Deblurring and 3D Reconstruction for Macrophotography
Yifan Zhao, Liangchen Li, Yuqi Zhou, Kai Wang, Yan Liang, and Juyong Zhang
Automatic Reconstruction of Woven Cloth from a Single Close-up Image
Chenghao Wu, Apoorv Khattar, Junqiu Zhu, Steve Pettifer, Lingqi Yan, and Zahra Montazeri
Shape Extraction
Swept Volume Computation with Enhanced Geometric Detail Preservation
Pengfei Wang, Yuexin Yang, Shuangmin Chen, Shiqing Xin, Changhe Tu, and Wenping Wang
PaMO: Parallel Mesh Optimization for Intersection-Free Low-Poly Modeling on the GPU
Seonghun Oh, Xiaodi Yuan, Xinyue Wei, Ruoxi Shi, Fanbo Xiang, Minghua Liu, and Hao Su
Computational Design of Body-Supporting Assemblies
Yixuan He, Rulin Chen, Bailin Deng, and Peng Song
Synthesizing 3D Shapes
Procedural Multiscale Geometry Modeling using Implicit Functions
Bojja Venu, Adam Bosak, and Juan Raúl Padrón-Griffe
MF-SDF: Neural Implicit Surface Reconstruction using Mixed Incident Illumination and Fourier Feature Optimization
Xueyang Zhou, Xukun Shen, and Yong Hu
A Solver-Aided Hierarchical Language for LLM-Driven CAD Design
Ben T. Jones, Zihan Zhang, Felix Hähnlein, Wojciech Matusik, Maaz Ahmad, Vladimir Kim, and Adriana Schulz
TopoGen: Topology-Aware 3D Generation with Persistence Points
Jiangbei Hu, Ben Fei, Baixin Xu, Fei Hou, Shengfa Wang, Na Lei, Weidong Yang, Chen Qian, and Ying He
Stylization
StyleMM: Stylized 3D Morphable Face Model via Text Driven Aligned Image Translation
Seungmi Lee, Kwan Yun, and Junyong Noh
SPG: Style-Prompting Guidance for Style-Specific Content Creation
Qian Liang, Zichong Chen, Yang Zhou, and Hui Huang
Using Saliency for Semantic Image Abstractions in Robotic Painting
Michael Stroh, Patrick Paetzold, Daniel Berio, Rebecca Kehlbeck, Frederic Fol Leymarie, Oliver Deussen, and Noura Faraj

BibTeX (44-Issue 7)
                
@article{10.1111:cgf.70260,
  journal = {Computer Graphics Forum},
  title = {{Pacific Graphics 2025 - CGF 44-7: Frontmatter}},
  author = {Christie, Marc and Pietroni, Nico and Wang, Yu-Shuen},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70260}
}
                
@article{10.1111:cgf.70227,
  journal = {Computer Graphics Forum},
  title = {{LucidFusion: Reconstructing 3D Gaussians with Arbitrary Unposed Images}},
  author = {He, Hao and Liang, Yixun and Wang, Luozhou and Cai, Yuanhao and Xu, Xinli and Guo, Haoxiang and Wen, Xiang and Chen, Yingcong},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70227}
}
                
@article{10.1111:cgf.70228,
  journal = {Computer Graphics Forum},
  title = {{Single-Line Drawing Vectorization}},
  author = {Magne, Tanguy and Sorkine-Hornung, Olga},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70228}
}
                
@article{10.1111:cgf.70229,
  journal = {Computer Graphics Forum},
  title = {{RT-HDIST: Ray-Tracing Core-based Hausdorff Distance Computation}},
  author = {Kim, YoungWoo and Lee, Jaehong and Kim, Duksu},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70229}
}
                
@article{10.1111:cgf.70230,
  journal = {Computer Graphics Forum},
  title = {{High-Performance Elliptical Cone Tracing}},
  author = {Emre, Umut and Kanak, Aryan and Steinberg, Shlomi},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70230}
}
                
@article{10.1111:cgf.70231,
  journal = {Computer Graphics Forum},
  title = {{IPFNet: Implicit Primitive Fitting for Robust Point Cloud Segmentation}},
  author = {Zhou, Shengdi and Zan, Xiaoqiang and Zhou, Bin},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70231}
}
                
@article{10.1111:cgf.70232,
  journal = {Computer Graphics Forum},
  title = {{GNF: Gaussian Neural Fields for Multidimensional Signal Representation and Reconstruction}},
  author = {Bouzidi, Abelaziz and Laga, Hamid and Wannous, Hazem and Sohel, Ferdous},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70232}
}
                
@article{10.1111:cgf.70233,
  journal = {Computer Graphics Forum},
  title = {{Procedural Multiscale Geometry Modeling using Implicit Functions}},
  author = {Venu, Bojja and Bosak, Adam and Padrón-Griffe, Juan Raúl},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70233}
}
                
@article{10.1111:cgf.70234,
  journal = {Computer Graphics Forum},
  title = {{StyleMM: Stylized 3D Morphable Face Model via Text Driven Aligned Image Translation}},
  author = {Lee, Seungmi and Yun, Kwan and Noh, Junyong},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70234}
}
                
@article{10.1111:cgf.70235,
  journal = {Computer Graphics Forum},
  title = {{Projective Displacement Mapping for Ray Traced Editable Surfaces}},
  author = {Hoetzlein, Rama},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70235}
}
                
@article{10.1111:cgf.70236,
  journal = {Computer Graphics Forum},
  title = {{Self-Supervised Humidity-Controllable Garment Simulation via Capillary Bridge Modeling}},
  author = {Shi, Min and Wang, Xinran and Zhang, Jia-Qi and Gao, Lin and Zhu, Dengming and Zhang, Hongyan},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70236}
}
                
@article{10.1111:cgf.70237,
  journal = {Computer Graphics Forum},
  title = {{Computational Design of Body-Supporting Assemblies}},
  author = {He, Yixuan and Chen, Rulin and Deng, Bailin and Song, Peng},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70237}
}
                
@article{10.1111:cgf.70238,
  journal = {Computer Graphics Forum},
  title = {{Swept Volume Computation with Enhanced Geometric Detail Preservation}},
  author = {Wang, Pengfei and Yang, Yuexin and Chen, Shuangmin and Xin, Shiqing and Tu, Changhe and Wang, Wenping},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70238}
}
                
@article{10.1111:cgf.70239,
  journal = {Computer Graphics Forum},
  title = {{Uncertainty-Aware Adjustment via Learnable Coefficients for Detailed 3D Reconstruction of Clothed Humans from Single Images}},
  author = {Yang, Yadan and Li, Yunze and Ying, Fangli and Phaphuangwittayakul, Aniwat and Dhuny, Riyad},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70239}
}
                
@article{10.1111:cgf.70240,
  journal = {Computer Graphics Forum},
  title = {{ClothingTwin: Reconstructing Inner and Outer Layers of Clothing Using 3D Gaussian Splatting}},
  author = {Jung, Munkyung and Lee, Dohae and Lee, In-Kwon},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70240}
}
                
@article{10.1111:cgf.70241,
  journal = {Computer Graphics Forum},
  title = {{DAATSim: Depth-Aware Atmospheric Turbulence Simulation for Fast Image Rendering}},
  author = {Saha, Ripon Kumar and Zhang, Yufan and Ye, Jinwei and Jayasuriya, Suren},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70241}
}
                
@article{10.1111:cgf.70242,
  journal = {Computer Graphics Forum},
  title = {{TensoIS: A Step Towards Feed-Forward Tensorial Inverse Subsurface Scattering for Perlin Distributed Heterogeneous Media}},
  author = {Tiwari, Ashish and Bhardwaj, Satyam and Bachwana, Yash and Sahu, Parag Sarvoday and Ali, T. M. Feroz and Chintalapati, Bhargava and Raman, Shanmuganathan},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70242}
}
                
@article{10.1111:cgf.70243,
  journal = {Computer Graphics Forum},
  title = {{Automatic Reconstruction of Woven Cloth from a Single Close-up Image}},
  author = {Wu, Chenghao and Khattar, Apoorv and Zhu, Junqiu and Pettifer, Steve and Yan, Lingqi and Montazeri, Zahra},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70243}
}
                
@article{10.1111:cgf.70244,
  journal = {Computer Graphics Forum},
  title = {{MF-SDF: Neural Implicit Surface Reconstruction using Mixed Incident Illumination and Fourier Feature Optimization}},
  author = {Zhou, Xueyang and Shen, Xukun and Hu, Yong},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70244}
}
                
@article{10.1111:cgf.70245,
  journal = {Computer Graphics Forum},
  title = {{Feature Disentanglement in GANs for Photorealistic Multi-view Hair Transfer}},
  author = {Xu, Jiayi and Wu, Zhengyang and Zhang, Chenming and Jin, Xiaogang and Ji, Yaohua},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70245}
}
                
@article{10.1111:cgf.70246,
  journal = {Computer Graphics Forum},
  title = {{Region-Aware Sparse Attention Network for Lane Detection}},
  author = {Deng, Yan and Xiao, Guoqiang},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70246}
}
                
@article{10.1111:cgf.70247,
  journal = {Computer Graphics Forum},
  title = {{View-Independent Wire Art Modeling via Manifold Fitting}},
  author = {Huang, HuiGuang and Wu, Dong-Yi and Wang, Yulin and Cao, Yu and Lee, Tong-Yee},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70247}
}
                
@article{10.1111:cgf.70248,
  journal = {Computer Graphics Forum},
  title = {{GS-Share: Enabling High-fidelity Map Sharing with Incremental Gaussian Splatting}},
  author = {Zhang, Xinran and Zhu, Hanqi and Duan, Yifan and Zhang, Yanyong},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70248}
}
                
@article{10.1111:cgf.70249,
  journal = {Computer Graphics Forum},
  title = {{FlatCAD: Fast Curvature Regularization of Neural SDFs for CAD Models}},
  author = {Yin, Haotian and Plocharski, Aleksander and Wlodarczyk, Michal Jan and Kida, Mikolaj and Musialski, Przemyslaw},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70249}
}
                
@article{10.1111:cgf.70250,
  journal = {Computer Graphics Forum},
  title = {{A Solver-Aided Hierarchical Language for LLM-Driven CAD Design}},
  author = {Jones, Ben T. and Zhang, Zihan and Hähnlein, Felix and Matusik, Wojciech and Ahmad, Maaz and Kim, Vladimir and Schulz, Adriana},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70250}
}
                
@article{10.1111:cgf.70251,
  journal = {Computer Graphics Forum},
  title = {{SPG: Style-Prompting Guidance for Style-Specific Content Creation}},
  author = {Liang, Qian and Chen, Zichong and Zhou, Yang and Huang, Hui},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70251}
}
                
@article{10.1111:cgf.70252,
  journal = {Computer Graphics Forum},
  title = {{Introducing Unbiased Depth into 2D Gaussian Splatting for High-accuracy Surface Reconstruction}},
  author = {Yang, Yixin and Zhou, Yang and Huang, Hui},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70252}
}
                
@article{10.1111:cgf.70253,
  journal = {Computer Graphics Forum},
  title = {{Joint Deblurring and 3D Reconstruction for Macrophotography}},
  author = {Zhao, Yifan and Li, Liangchen and Zhou, Yuqi and Wang, Kai and Liang, Yan and Zhang, Juyong},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70253}
}
                
@article{10.1111:cgf.70254,
  journal = {Computer Graphics Forum},
  title = {{BoxFusion: Reconstruction-Free Open-Vocabulary 3D Object Detection via Real-Time Multi-View Box Fusion}},
  author = {Lan, Yuqing and Zhu, Chenyang and Gao, Zhirui and Zhang, Jiazhao and Cao, Yihan and Yi, Renjiao and Wang, Yijie and Xu, Kai},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70254}
}
                
@article{10.1111:cgf.70255,
  journal = {Computer Graphics Forum},
  title = {{Hybrid Sparse Transformer and Feature Alignment for Efficient Image Completion}},
  author = {Chen, L. and Sun, Hao},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70255}
}
                
@article{10.1111:cgf.70256,
  journal = {Computer Graphics Forum},
  title = {{G-SplatGAN: Disentangled 3D Gaussian Generation for Complex Shapes via Multi-Scale Patch Discriminators}},
  author = {Li, Jiaqi and Dang, Haochuan and Zhou, Zhi and Zhu, Junke and Huang, Zhangjin},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70256}
}
                
@article{10.1111:cgf.70257,
  journal = {Computer Graphics Forum},
  title = {{TopoGen: Topology-Aware 3D Generation with Persistence Points}},
  author = {Hu, Jiangbei and Fei, Ben and Xu, Baixin and Hou, Fei and Wang, Shengfa and Lei, Na and Yang, Weidong and Qian, Chen and He, Ying},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70257}
}
                
@article{10.1111:cgf.70258,
  journal = {Computer Graphics Forum},
  title = {{Accelerating Signed Distance Functions}},
  author = {Hubert-Brierre, Pierre and Guérin, Eric and Peytavie, Adrien and Galin, Eric},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70258}
}
                
@article{10.1111:cgf.70259,
  journal = {Computer Graphics Forum},
  title = {{Using Saliency for Semantic Image Abstractions in Robotic Painting}},
  author = {Stroh, Michael and Paetzold, Patrick and Berio, Daniel and Kehlbeck, Rebecca and Leymarie, Frederic Fol and Deussen, Oliver and Faraj, Noura},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70259}
}
                
@article{10.1111:cgf.70261,
  journal = {Computer Graphics Forum},
  title = {{EmoDiffGes: Emotion-Aware Co-Speech Holistic Gesture Generation with Progressive Synergistic Diffusion}},
  author = {Li, Xinru and Lin, Jingzhong and Zhang, Bohao and Qi, Yuanyuan and Wang, Changbo and He, Gaoqi},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70261}
}
                
@article{10.1111:cgf.70262,
  journal = {Computer Graphics Forum},
  title = {{LTC-IR: Multiview Edge-Aware Inverse Rendering with Linearly Transformed Cosines}},
  author = {Park, Dabeen and Park, Junsuh and Son, Jooeun and Lee, Seungyong and Lee, Joo Ho},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70262}
}
                
@article{10.1111:cgf.70263,
  journal = {Computer Graphics Forum},
  title = {{Text-Guided Diffusion with Spectral Convolution for 3D Human Pose Estimation}},
  author = {Shi, Liyuan and Wu, Suping and Yang, Sheng and Qiu, Weibin and Qiang, Dong and Zhao, Jiarui},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70263}
}
                
@article{10.1111:cgf.70264,
  journal = {Computer Graphics Forum},
  title = {{FAHNet: Accurate and Robust Normal Estimation for Point Clouds via Frequency-Aware Hierarchical Geometry}},
  author = {Wang, Chengwei and Wu, Wenming and Fei, Yue and Zhang, Gaofeng and Zheng, Liping},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70264}
}
                
@article{10.1111:cgf.70265,
  journal = {Computer Graphics Forum},
  title = {{Gaussian Splatting for Large-Scale Aerial Scene Reconstruction From Ultra-High-Resolution Images}},
  author = {Sun, Qiulin and Lai, Wei and Li, Yixian and Zhang, Yanci},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70265}
}
                
@article{10.1111:cgf.70266,
  journal = {Computer Graphics Forum},
  title = {{PARC: A Two-Stage Multi-Modal Framework for Point Cloud Completion}},
  author = {Cai, Yujiao and Su, Yuhao},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70266}
}
                
@article{10.1111:cgf.70267,
  journal = {Computer Graphics Forum},
  title = {{PaMO: Parallel Mesh Optimization for Intersection-Free Low-Poly Modeling on the GPU}},
  author = {Oh, Seonghun and Yuan, Xiaodi and Wei, Xinyue and Shi, Ruoxi and Xiang, Fanbo and Liu, Minghua and Su, Hao},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70267}
}
                
@article{10.1111:cgf.70268,
  journal = {Computer Graphics Forum},
  title = {{Multimodal 3D Few-Shot Classification via Gaussian Mixture Discriminant Analysis}},
  author = {Wu, Yiqi and Wu, Huachao and Hu, Ronglei and Chen, Yilin and Zhang, Dejun},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70268}
}
                
@article{10.1111:cgf.70269,
  journal = {Computer Graphics Forum},
  title = {{Preconditioned Deformation Grids}},
  author = {Kaltheuner, Julian and Oebel, Alexander and Droege, Hannah and Stotko, Patrick and Klein, Reinhard},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70269}
}
                
@article{10.1111:cgf.70270,
  journal = {Computer Graphics Forum},
  title = {{WaterGS: Physically-Based Imaging in Gaussian Splatting for Underwater Scene Reconstruction}},
  author = {Wang, Su Qing and Wu, Wen Bin and Shi, Min and Li, Zhao Xin and Wang, Qi and Zhu, Deng Ming},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70270}
}
                
@article{10.1111:cgf.70271,
  journal = {Computer Graphics Forum},
  title = {{Gaussians on their Way: Wasserstein-Constrained 4D Gaussian Splatting with State-Space Modeling}},
  author = {Deng, Junli and Shi, Ping and Luo, Yihao and Li, Qipei},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70271}
}
                
@article{10.1111:cgf.70272,
  journal = {Computer Graphics Forum},
  title = {{Real-Time Per-Garment Virtual Try-On with Temporal Consistency for Loose-Fitting Garments}},
  author = {Wu, Zaiqiang and Shen, I-Chao and Igarashi, Takeo},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70272}
}
                
@article{10.1111:cgf.70273,
  journal = {Computer Graphics Forum},
  title = {{LayoutRectifier: An Optimization-based Post-processing for Graphic Design Layout Generation}},
  author = {Shen, I-Chao and Shamir, Ariel and Igarashi, Takeo},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70273}
}
                
@article{10.1111:cgf.70274,
  journal = {Computer Graphics Forum},
  title = {{FlowCapX: Physics-Grounded Flow Capture with Long-Term Consistency}},
  author = {Tao, Ningxiao and Zhang, Liru and Ni, Xingyu and Chu, Mengyu and Chen, Baoquan},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70274}
}
                
@article{10.1111:cgf.70275,
  journal = {Computer Graphics Forum},
  title = {{Geometric Integration for Neural Control Variates}},
  author = {Meister, Daniel and Harada, Takahiro},
  year = {2025},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.70275}
}


Recent Submissions

  • Item
    Pacific Graphics 2025 - CGF 44-7: Frontmatter
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
  • Item
    LucidFusion: Reconstructing 3D Gaussians with Arbitrary Unposed Images
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) He, Hao; Liang, Yixun; Wang, Luozhou; Cai, Yuanhao; Xu, Xinli; Guo, Haoxiang; Wen, Xiang; Chen, Yingcong
    Recent large reconstruction models have made notable progress in generating high-quality 3D objects from single images. However, current reconstruction methods often rely on explicit camera pose estimation or fixed viewpoints, restricting their flexibility and practical applicability. We reformulate 3D reconstruction as image-to-image translation and introduce the Relative Coordinate Map (RCM), which aligns multiple unposed images to a ''main'' view without pose estimation. While RCM simplifies the process, its lack of global 3D supervision can yield noisy outputs. To address this, we propose Relative Coordinate Gaussians (RCG) as an extension to RCM, which treats each pixel's coordinates as a Gaussian center and employs differentiable rasterization for consistent geometry and pose recovery. Our LucidFusion framework handles an arbitrary number of unposed inputs, producing robust 3D reconstructions within seconds and paving the way for more flexible, pose-free 3D pipelines.
  • Item
    Single-Line Drawing Vectorization
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Magne, Tanguy; Sorkine-Hornung, Olga
    Vectorizing line drawings is a repetitive, yet necessary task that professional creatives must perform to obtain an easily editable and scalable digital representation of a raster sketch. State-of-the-art automatic methods in this domain can create series of curves that closely fit the appearance of the drawing. However, they often neglect the line parameterization. Thus, their vector representation cannot be edited naturally by following the drawing order. We present a novel method for single-line drawing vectorization that addresses this issue. Single-line drawings consist of a single stroke, where the line can intersect itself multiple times, making the drawing order non-trivial to recover. Our method fits a single parametric curve, represented as a Bézier spline, to approximate the stroke in the input raster image. To this end, we produce a graph representation of the input and employ geometric priors and a specially trained neural network to correctly capture and classify curve intersections and their traversal configuration. Our method is easily extended to drawings containing multiple strokes while preserving their integrity and order. We compare our vectorized results with the work of several artists, showing that our stroke order is similar to the one artists employ naturally. Our vectorization method achieves state-of-the-art results in terms of similarity with the original drawing and quality of the vectorization on a benchmark of single-line drawings. Our method's results can be refined interactively, making it easy to integrate into professional workflows. Our code and results are available at https://github.com/tanguymagne/SLD-Vectorization.
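    As background only (the paper's own fitting pipeline is not reproduced here), the building block of the Bézier spline the abstract mentions is the cubic Bézier segment, which can be evaluated with De Casteljau's algorithm:

    ```python
    def cubic_bezier(p0, p1, p2, p3, t):
        """Evaluate one cubic Bezier segment at parameter t in [0, 1]
        using De Casteljau's algorithm (numerically stable repeated lerps)."""
        lerp = lambda a, b, s: tuple((1 - s) * ai + s * bi for ai, bi in zip(a, b))
        # First level: collapse the four control points to three.
        a, b, c = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)
        # Second level: three points to two.
        d, e = lerp(a, b, t), lerp(b, c, t)
        # Final level: the point on the curve.
        return lerp(d, e, t)
    ```

    A spline is then a chain of such segments sharing endpoints; the vectorization problem is choosing the control points (and the stroke order through intersections) so the chain matches the raster input.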
  • Item
    RT-HDIST: Ray-Tracing Core-based Hausdorff Distance Computation
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Kim, YoungWoo; Lee, Jaehong; Kim, Duksu
    The Hausdorff distance is a fundamental metric with widespread applications across various fields. However, its computation remains computationally expensive, especially for large-scale datasets. This work targets the exact point-to-point Hausdorff distance on point sets: we present RT-HDIST, the first Hausdorff distance algorithm accelerated by ray-tracing cores (RT-cores). By reformulating the Hausdorff distance problem as a series of nearest-neighbor searches and introducing a novel quantized voxel-index space, RT-HDIST achieves significant reductions in computational overhead while maintaining exact results. Extensive benchmarks demonstrate up to a two-order-of-magnitude speedup over prior state-of-the-art methods, underscoring RT-HDIST's potential for real-time and large-scale applications.
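    For reference, the nearest-neighbor formulation of the metric itself can be sketched in a few lines of NumPy. This brute-force version only defines what RT-HDIST computes; it reproduces none of the paper's RT-core acceleration or quantized voxel-index space:

    ```python
    import numpy as np

    def directed_hausdorff(A, B):
        """max over a in A of the distance from a to its nearest neighbor in B."""
        # Pairwise distance matrix via broadcasting: d[i, j] = ||A[i] - B[j]||.
        d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
        return d.min(axis=1).max()

    def hausdorff(A, B):
        """Symmetric Hausdorff distance between point sets A and B."""
        return max(directed_hausdorff(A, B), directed_hausdorff(B, A))
    ```

    The brute-force pairwise matrix is O(|A|·|B|) in time and memory, which is exactly the cost that nearest-neighbor acceleration structures (here, RT-cores) are meant to avoid.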
  • Item
    High-Performance Elliptical Cone Tracing
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Emre, Umut; Kanak, Aryan; Steinberg, Shlomi
    In this work, we discuss elliptical cone traversal in scenes that employ typical triangular meshes. We derive accurate and numerically-stable intersection tests for an elliptical conic frustum with an AABB, plane, edge and a triangle, and analyze the performance of elliptical cone tracing when using different acceleration data structures: SAH-based K-d trees, BVHs as well as a modern 8-wide BVH variant adapted for cone tracing, and compare with ray tracing. In addition, several cone traversal algorithms are analyzed, and we develop novel heuristics and optimizations that give better performance than previous traversal approaches. The results highlight the difference in performance characteristics between rays and cones, and serve to guide the design of acceleration data structures for applications that employ cone tracing.
  • Item
    IPFNet: Implicit Primitive Fitting for Robust Point Cloud Segmentation
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Zhou, Shengdi; Zan, Xiaoqiang; Zhou, Bin; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    The segmentation and fitting of geometric primitives from point clouds is a widely adopted approach for modelling the underlying geometric structure of objects in reverse engineering and numerous graphics applications. Existing methods either overlook the role of geometric information in assisting segmentation or incorporate reconstruction losses without leveraging modern neural implicit field representations, leading to limited robustness against noise and weak expressive power in reconstruction. We propose a point cloud segmentation and fitting framework based on neural implicit representations, fully leveraging neural implicit fields' expressive power and robustness. The key idea is the unification of geometric representation within a neural implicit field framework, enabling seamless integration of geometric loss for improved performance. In contrast to previous approaches that focus solely on clustering in the feature embedding space, our method enhances instance segmentation through semanticaware point embeddings and simultaneously improves semantic predictions via instance-level feature fusion. Furthermore, we incorporate 3D-specific cues such as spatial dimensions and geometric connectivity, which are uniquely informative in the 3D domain. Extensive experiments and comparisons against previous methods demonstrate our robustness and superiority.
  • Item
    GNF: Gaussian Neural Fields for Multidimensional Signal Representation and Reconstruction
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Bouzidi, Abelaziz; Laga, Hamid; Wannous, Hazem; Sohel, Ferdous; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    Neural fields have emerged as a powerful framework for representing continuous multidimensional signals such as images and videos, 3D and 4D objects and scenes, and radiance fields. While efficient, achieving high-quality representation requires the use of wide and deep neural networks. These, however, are slow to train and evaluate. Although several acceleration techniques have been proposed, they either trade memory for faster training and/or inference, rely on thousands of fitted primitives with considerable optimization time, or compromise the smooth, continuous nature of neural fields. In this paper, we introduce Gaussian Neural Fields (GNF), a novel compact neural decoder that maps learned feature grids into continuous non-linear signals, such as RGB images, Signed Distance Functions (SDFs), and radiance fields, using a single compact layer of Gaussian kernels defined in a high-dimensional feature space. Our key observation is that neurons in traditional MLPs perform simple computations, usually a dot product followed by an activation function, necessitating wide and deep MLPs or high-resolution feature grids to model complex functions. In this paper, we show that replacing MLP-based decoders with Gaussian kernels whose centers are learned features yields highly accurate representations of 2D (RGB), 3D (geometry), and 5D (radiance fields) signals with just a single layer of such kernels. This representation is highly parallelizable, operates on low-resolution grids, and trains in under 15 seconds for 3D geometry and under 11 minutes for view synthesis. GNF matches the accuracy of deep MLP-based decoders with far fewer parameters and significantly higher inference throughput. The source code is publicly available at https://grbfnet.github.io/.
  • Item
    Procedural Multiscale Geometry Modeling using Implicit Functions
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Venu, Bojja; Bosak, Adam; Padrón-Griffe, Juan Raúl; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    Materials exhibit geometric structures across mesoscopic to microscopic scales, influencing macroscale properties such as appearance, mechanical strength, and thermal behavior. Capturing and modeling these multiscale structures is challenging but essential for computer graphics, engineering, and materials science. We present a framework inspired by hypertexture methods, using implicit surfaces and sphere tracing to synthesize multiscale structures on the fly without precomputation. This framework models volumetric materials with particulate, fibrous, porous, and laminar structures, allowing control over size, shape, density, distribution, and orientation. We enhance structural diversity by superimposing implicit periodic functions while improving computational efficiency. The framework also supports spatially varying particulate media, particle agglomeration, and piling on convex and concave structures, such as rock formations (mesoscale), without explicit simulation. We demonstrate its potential in the appearance modeling of volumetric materials and investigate how spatially varying properties affect the perceived macroscale appearance. As a proof of concept, we show that microstructures created by our framework can be reconstructed from image and distance values defined by implicit surfaces, using both first-order and gradient-free optimization methods.
  • Item
    StyleMM: Stylized 3D Morphable Face Model via Text Driven Aligned Image Translation
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Lee, Seungmi; Yun, Kwan; Noh, Junyong; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    We introduce StyleMM, a novel framework that can construct a stylized 3D Morphable Model (3DMM) based on user-defined text descriptions specifying a target style. Building upon a pre-trained mesh deformation network and a texture generator for original 3DMM-based realistic human faces, our approach fine-tunes these models using stylized facial images generated via text-guided image-to-image (i2i) translation with a diffusion model, which serve as stylization targets for the rendered mesh. To prevent undesired changes in identity, facial alignment, or expressions during i2i translation, we introduce a stylization method that explicitly preserves the facial attributes of the source image. By maintaining these critical attributes during image stylization, the proposed approach ensures consistent 3D style transfer across the 3DMM parameter space through imagebased training. Once trained, StyleMM enables feed-forward generation of stylized face meshes with explicit control over shape, expression, and texture parameters, producing meshes with consistent vertex connectivity and animatability. Quantitative and qualitative evaluations demonstrate that our approach outperforms state-of-the-art methods in terms of identity-level facial diversity and stylization capability. The code and videos are available at kwanyun.github.io/stylemm_page.
  • Item
    Projective Displacement Mapping for Ray Traced Editable Surfaces
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Hoetzlein, Rama; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    Displacement mapping is an important tool for modeling detailed geometric features. We explore the problem of authoring complex surfaces while ray tracing interactively. Current techniques for ray tracing displaced surfaces rely on acceleration structures that require dynamic rebuilding when edited. These techniques are typically used for massive static scenes or the compression of detailed source assets. Our interest lies in modeling and look development of artistic features with real-time ray tracing. We introduce projective displacement mapping as a direct sampling method combined with a hardware BVH. Quality and performance are improved over existing methods with smoothed displaced normals, thin feature sampling, tight prism bounds and ray bi-linear patch intersections.
  • Item
    Self-Supervised Humidity-Controllable Garment Simulation via Capillary Bridge Modeling
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Shi, Min; Wang, Xinran; Zhang, Jia-Qi; Gao, Lin; Zhu, Dengming; Zhang, Hongyan; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    Simulating wet clothing remains a significant challenge due to the complex physical interactions between moist fabric and the human body, compounded by the lack of dedicated datasets for training data-driven models. Existing self-supervised approaches struggle to capture moisture-induced dynamics such as skin adhesion, anisotropic surface resistance, and non-linear wrinkling, leading to limited accuracy and efficiency. To address this, we present SHGS, a novel self-supervised framework for humidity-controllable clothing simulation grounded in the physical modeling of capillary bridges that form between fabric and skin. We abstract the forces induced by wetness into two physically motivated components: a normal adhesive force derived from Laplace pressure and a tangential shear-resistance force that opposes relative motion along the fabric surface. By formulating these forces as potential energy for conservative effects and as mechanical work for non-conservative effects, we construct a physics-consistent wetness loss. This enables self-supervised training without requiring labeled data of wet clothing. Our humidity-sensitive dynamics are driven by a multi-layer graph neural network, which facilitates a smooth and physically realistic transition between different moisture levels. This architecture decouples the garment's dynamics in wet and dry states through a local weight interpolation mechanism, adjusting the fabric's behavior in response to varying humidity conditions. Experiments demonstrate that SHGS outperforms existing methods in both visual fidelity and computational efficiency, marking a significant advancement in realistic wet-cloth simulation.
  • Item
    Computational Design of Body-Supporting Assemblies
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) He, Yixuan; Chen, Rulin; Deng, Bailin; Song, Peng; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    A body-supporting assembly is an assembly of parts that physically supports a human body during activities like sitting, lying, or leaning. A body-supporting assembly has a complex global shape to support a specific human body posture, yet each component part has a relatively simple geometry to facilitate fabrication, storage, and maintenance. In this paper, we aim to model and design a personalized body-supporting assembly that fits a given human body posture, aiming to make the assembly comfortable to use. We choose to model a body-supporting assembly from scratch to offer high flexibility for fitting a given body posture, which however makes it challenging to determine the assembly's topology and geometry. To address this problem, we classify parts in the assembly into two categories according the functionality: supporting parts for fitting different portions of the body and connecting parts for connecting all the supporting parts to form a stable structure. We also propose a geometric representation of supporting parts such that they can have a variety of shapes controlled by a few parameters. Given a body posture as input, we present a computational approach for designing a body-supporting assembly that fits the posture, in which the supporting parts are initialized and optimized to minimize a discomfort measure and then the connecting parts are generated using a procedural approach. We demonstrate the effectiveness of our approach by designing body-supporting assemblies that accommodate to a variety of body postures and 3D printing two of them for physical validation.
  • Item
    Swept Volume Computation with Enhanced Geometric Detail Preservation
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Wang, Pengfei; Yang, Yuexin; Chen, Shuangmin; Xin, Shiqing; Tu, Changhe; Wang, Wenping; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    Swept volume computation-the determination of regions occupied by moving objects-is essential in graphics, robotics, and manufacturing. Existing approaches either explicitly track surfaces, suffering from robustness issues under complex interactions, or employ implicit representations that trade off geometric fidelity and face optimization difficulties. We propose a novel inversion of motion perspective: rather than tracking object motion, we fix the object and trace spatial points backward in time, reducing complex trajectories to efficiently linearizable point motions. Based on this, we introduce a multi-field tetrahedral framework that maintains multiple distance fileds per element, preserving fine geometric details at trajectory intersections where single-field methods fail. Our method robustly computes swept volumes for diverse motions, including translations and screw motions, and enables practical applications in path planning and collision detection.
  • Item
    Uncertainty-Aware Adjustment via Learnable Coefficients for Detailed 3D Reconstruction of Clothed Humans from Single Images
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Yang, Yadan; Li, Yunze; Ying, Fangli; Phaphuangwittayakul, Aniwat; Dhuny, Riyad; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    Although single-image 3D human reconstruction has made significant progress in recent years, few of the current state-of-theart methods can accurately restore the appearance and geometric details of loose clothing. To achieve high-quality reconstruction of a human body wearing loose clothing, we propose a learnable dynamic adjustment framework that integrates side-view features and the uncertainty of the parametric human body model to adaptively regulate its reliability based on the clothing type. Specifically, we first adopt the Vision Transformer model as an encoder to capture the image features of the input image, and then employ SMPL-X to decouple the side-view body features. Secondly, to reduce the limitations imposed by the regularization of the parametric model, particularly for loose garments, we introduce a learnable coefficient to reduce the reliance on SMPLX. This strategy effectively accommodates the large deformations caused by loose clothing, thereby accurately expressing the posture and clothing in the image. To evaluate the effectiveness, we validate our method on the public CLOTH4D and Cape datasets, and the experimental results demonstrate better performance compared to existing approaches. The code is available at https://github.com/yyd0613/CoRe-Human.
  • Item
    ClothingTwin: Reconstructing Inner and Outer Layers of Clothing Using 3D Gaussian Splatting
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Jung, Munkyung; Lee, Dohae; Lee, In-Kwon; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    We introduce ClothingTwin, a novel end-to-end framework for reconstructing 3D digital twins of clothing that capture both the outer and inner fabric -without the need for manual mannequin removal. Traditional 2D ''ghost mannequin'' photography techniques remove the mannequin and composite partial inner textures to create images in which the garment appears as if it were worn by a transparent model. However, extending such method to photorealistic 3D Gaussian Splatting (3DGS) is far more challenging. Achieving consistent inner-layer compositing across the large sets of images used for 3DGS optimization quickly becomes impractical if done manually. To address these issues, ClothingTwin introduces three key innovations. First, a specialized image acquisition protocol captures two sets of images for each garment: one worn normally on the mannequin (outer layer exposed) and one worn inside-out (inner layer exposed). This eliminates the need to painstakingly edit out mannequins in thousands of images and provides full coverage of all fabric surfaces. Second, we employ a mesh-guided 3DGS reconstruction for each layer and leverage Non-Rigid Iterative Closest Point (ICP) to align outer and inner point-clouds despite distinct geometries. Third, our enhanced rendering pipeline-featuring mesh-guided back-face culling, back-to-front alpha blending, and recalculated spherical harmonic angles-ensures photorealistic visualization of the combined outer and inner layers without inter-layer artifacts. Experimental evaluations on various garments show that ClothingTwin outperforms conventional 3DGS-based methods, and our ablation study validates the effectiveness of each proposed component.
  • Item
    DAATSim: Depth-Aware Atmospheric Turbulence Simulation for Fast Image Rendering
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Saha, Ripon Kumar; Zhang, Yufan; Ye, Jinwei; Jayasuriya, Suren; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    Simulating the effects of atmospheric turbulence for imaging systems operating over long distances is a significant challenge for optical and computer graphics models. Physically-based ray tracing over kilometers of distance is difficult due to the need to define a spatio-temporal volume of varying refractive index. Even if such a volume can be defined, Monte Carlo rendering approximations for light refraction through the environment would not yield real-time solutions needed for video game engines or online dataset augmentation for machine learning. While existing simulators based on procedurally-generated noise or textures have been proposed in these settings, these simulators often neglect the significant impact of scene depth, leading to unrealistic degradations for scenes with substantial foreground-background separation. This paper introduces a novel, physically-based atmospheric turbulence simulator that explicitly models depth-dependent effects while rendering frames at interactive/near real-time (> 10 FPS) rates for image resolutions up to 1024×1024 (real-time 35 FPS at 256×256 resolution with depth or 512×512 at 33 FPS without depth). Our hybrid approach combines spatially-varying wavefront aberrations using Zernike polynomials with pixel-wise depth modulation of both blur (via Point Spread Function interpolation) and geometric distortion or tilt. Our approach includes a novel fusion technique that integrates complementary strengths of leading monocular depth estimators to generate metrically accurate depth maps with enhanced edge fidelity. DAATSim is implemented efficiently on GPUs using Py- Torch incorporating optimizations like mixed-precision computation and caching to achieve efficient performance. We present quantitative and qualitative validation demonstrating the simulator's physical plausibility for generating turbulent video. DAATSim is made publicly available and open-source to the community: https://github.com/Riponcs/DAATSim.
  • Item
    TensoIS: A Step Towards Feed-Forward Tensorial Inverse Subsurface Scattering for Perlin Distributed Heterogeneous Media
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Tiwari, Ashish; Bhardwaj, Satyam; Bachwana, Yash; Sahu, Parag Sarvoday; Ali, T. M. Feroz; Chintalapati, Bhargava; Raman, Shanmuganathan; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    Estimating scattering parameters of heterogeneous media from images is a severely under-constrained and challenging problem. Most of the existing approaches model BSSRDF either through an analysis-by-synthesis approach, approximating complex path integrals, or using differentiable volume rendering techniques to account for heterogeneity. However, only a few studies have applied learning-based methods to estimate subsurface scattering parameters, but they assume homogeneous media. Interestingly, no specific distribution is known to us that can explicitly model the heterogeneous scattering parameters in the real world. Notably, procedural noise models such as Perlin and Fractal Perlin noise have been effective in representing intricate heterogeneities of natural, organic, and inorganic surfaces. Leveraging this, we first create HeteroSynth, a synthetic dataset comprising photorealistic images of heterogeneous media whose scattering parameters are modeled using Fractal Perlin noise. Furthermore, we propose Tensorial Inverse Scattering (TensoIS), a learning-based feed-forward framework to estimate these Perlin-distributed heterogeneous scattering parameters from sparse multi-view image observations. Instead of directly predicting the 3D scattering parameter volume, TensoIS uses learnable low-rank tensor components to represent the scattering volume. We evaluate TensoIS on unseen heterogeneous variations over shapes from the HeteroSynth test set, smoke and cloud geometries obtained from open-source realistic volumetric simulations, and some real-world samples to establish its effectiveness for inverse scattering. Overall, this study is an attempt to explore Perlin noise distribution, given the lack of any such well-defined distribution in literature, to potentially model real-world heterogeneous scattering in a feed-forward manner. Project Page: https://yashbachwana.github.io/TensoIS/
  • Item
    Automatic Reconstruction of Woven Cloth from a Single Close-up Image
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Wu, Chenghao; Khattar, Apoorv; Zhu, Junqiu; Pettifer, Steve; Yan, Lingqi; Montazeri, Zahra; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    Digital replication of woven fabrics presents significant challenges across a variety of sectors, from online retail to entertainment industries. To address this, we introduce an inverse rendering pipeline designed to estimate pattern, geometry, and appearance parameters of woven fabrics given a single close-up image as input. Our work is capable of simultaneously optimizing both discrete and continuous parameters without manual interventions. It outputs a wide array of parameters, encompassing discrete elements like weave patterns, ply and fiber number, using Simulated Annealing. It also recovers continuous parameters such as reflection and transmission components, aligning them with the target appearance through differentiable rendering. For irregularities caused by deformation and flyaways, we use 2D Gaussians to approximate them as a post-processing step. Our work does not pursue perfect matching of all fine details, it targets an automatic and end-to-end reconstruction pipeline that is robust to slight camera rotations and room light conditions within an acceptable time (15 minutes on CPU), unlike previous works which are either expensive, require manual intervention, assume given pattern, geometry or appearance, or strictly control camera and light conditions.
  • Item
    MF-SDF: Neural Implicit Surface Reconstruction using Mixed Incident Illumination and Fourier Feature Optimization
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Zhou, Xueyang; Shen, Xukun; Hu, Yong; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    The utilization of neural implicit surface as a geometry representation has proven to be an effective multi-view surface reconstruction method. Despite the promising results achieved, reconstructing geometry from objects in real-world scenes remains challenging due to the interaction between surface materials and complex ambient light, as well as shadow effects caused by self-occlusion, making it a highly ill-posed problem. To address this challenge, we propose MF-SDF, a method that use a hybrid neural network and spherical gaussian representation to model environmental lighting, so that the model can express the situation of multiple light sources including directional light (such as outdoor sunlight) in real-world scenarios. Benefit from this, our method effectively reconstructs coherent surfaces and accurately locates the shadow location on the surface. Furthermore, we adopt a shadow aware multi-view photometric consistency loss, which mitigates the erroneous reconstruction results of previous methods on surfaces containing shadows, thereby improve the overall smoothness of the surface. Additionally, unlike previous approaches that directly optimize spatial features, we propose a Fourier feature optimization method that directly optimizes the tensorial feature in the frequency domain. By optimizing the high-frequency components, this approach further enhances the details of surface reconstruction. Finally, through experiments, we demonstrate that our method outperforms existing methods in terms of reconstruction accuracy on real captured data.
  • Item
    Feature Disentanglement in GANs for Photorealistic Multi-view Hair Transfer
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Xu, Jiayi; Wu, Zhengyang; Zhang, Chenming; Jin, Xiaogang; Ji, Yaohua; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    Fast and highly realistic multi-view hair transfer plays a crucial role in evaluating the effectiveness of virtual hair try-on systems. However, GAN-based generation and editing methods face persistent challenges in feature disentanglement. Achieving pixel-level, attribute-specific modifications-such as changing hairstyle or hair color without affecting other facial features- remains a long-standing problem. To address this limitation, we propose a novel multi-view hair transfer framework that leverages a hair-only intermediate facial representation and a 3D-guided masking mechanism. Our approach disentangles triplane facial features into spatial geometric components and global style descriptors, enabling independent and precise control over hairstyle and hair color. By introducing a dedicated intermediate representation focused solely on hair and incorporating a two-stage feature fusion strategy guided by the generated 3D mask, our framework achieves fine-grained local editing across multiple viewpoints while preserving facial integrity and improving background consistency. Extensive experiments demonstrate that our method produces visually compelling and natural results in side-to-front view hair transfer tasks, offering a robust and flexible solution for high-fidelity hair reconstruction and manipulation.
  • Item
    Region-Aware Sparse Attention Network for Lane Detection
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Deng, Yan; Xiao, Guoqiang; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    Lane detection is a fundamental task in intelligent driving systems. However, the slender and sparse structure of lanes, combined with the dominance of irrelevant background regions in road scenes, makes accurate lane localization particularly challenging, especially under complex and adverse conditions. To address these issues, we propose a novel Region-Aware Sparse Attention Network (RSANet), which is designed to selectively enhance lane-relevant features while suppressing background interference. Specifically, we introduce the Region-guided Pooling Predictor (RPP) that generates lane region activation maps to guide the backbone network in focusing on informative areas. To improve the multi-scale feature fusion capability of the Feature Pyramid Network (FPN), we propose the Bilateral Pooling Attention Module (BPAM) that captures discriminative features by jointly modeling dependencies along both the channel and spatial dimensions. Furthermore, the Lane-guided Sparse Attention Mechanism (LSAM) efficiently aggregates global contextual information from the most relevant spatial regions to reinforce lane prior representations while significantly reducing redundant computation. Extensive experiments on benchmark datasets demonstrate that RSANet outperforms state-of-the-art methods in a variety of challenging scenarios. Notably, RSANet achieves an F1@50 score of 80.04% on the CULane dataset that shows notable improvements.
  • Item
    View-Independent Wire Art Modeling via Manifold Fitting
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Huang, HuiGuang; Wu, Dong-Yi; Wang, Yulin; Cao, Yu; Lee, Tong-Yee; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    This paper presents a novel fully automated method for generating view-independent abstract wire art from 3D models. The main challenge in creating line art is to strike a balance among abstraction, structural clarity, 3D perception, and consistent aesthetics from different viewpoints. Many existing approaches have been proposed, including extracting wire art from mesh, reconstructing it from pictures, etc. But they all suffer from the fact that the wires are usually very unorganized and cumbersome and usually can only guarantee the observation effect of specific viewpoints. To overcome these problems, we propose a paradigm shift: instead of predicting the line segments directly, we consider the generation of wire art as an optimizationdriven manifold-fitting problem. Thus we can abstract/generalize the 3D model while retaining the key properties necessary for appealing line art, including structural topology and connectivity, and maintain the three-dimensionality of the line art with a multi-perspective view. Experimental results show that our view-independent method outperforms previous methods in terms of line simplicity, shape fidelity, and visual consistency.
  • Item
    GS-Share: Enabling High-fidelity Map Sharing with Incremental Gaussian Splatting
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Zhang, Xinran; Zhu, Hanqi; Duan, Yifan; Zhang, Yanyong; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    Constructing and sharing 3D maps is essential for many applications, including autonomous driving and augmented reality. Recently, 3D Gaussian splatting has emerged as a promising approach for accurate 3D reconstruction. However, a practical map-sharing system that features high-fidelity, continuous updates, and network efficiency remains elusive. To address these challenges, we introduce GS-Share, a photorealistic map-sharing system with a compact representation. The core of GS-Share includes anchor-based global map construction, virtual-image-based map enhancement, and incremental map update. We evaluate GS-Share against state-of-the-art methods, demonstrating that our system achieves higher fidelity, particularly for extrapolated views, with improvements of 11%, 22%, and 74% in PSNR, LPIPS, and Depth L1, respectively. Furthermore, GS-Share is significantly more compact, reducing map transmission overhead by 36%.
  • Item
    FlatCAD: Fast Curvature Regularization of Neural SDFs for CAD Models
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Yin, Haotian; Plocharski, Aleksander; Wlodarczyk, Michal Jan; Kida, Mikolaj; Musialski, Przemyslaw; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    Neural signed-distance fields (SDFs) are a versatile backbone for neural geometry representation, but enforcing CAD-style developability usually requires Gaussian-curvature penalties with full Hessian evaluation and second-order differentiation, which are costly in memory and time. We introduce an off-diagonal Weingarten loss that regularizes only the mixed shape operator term that represents the gap between principal curvatures and flattens the surface. We present two variants: a finitedifference version using six SDF evaluations plus one gradient, and an auto-diff version using a single Hessian-vector product. Both converge to the exact mixed term and preserve the intended geometric properties without assembling the full Hessian. On the ABC benchmarks the losses match or exceed Hessian-based baselines while cutting GPU memory and training time by roughly a factor of two. The method is drop-in and framework-agnostic, enabling scalable curvature-aware SDF learning for engineering-grade shape reconstruction. Our code is available at https://flatcad.github.io/.
  • Item
    A Solver-Aided Hierarchical Language for LLM-Driven CAD Design
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Jones, Ben T.; Zhang, Zihan; Hähnlein, Felix; Matusik, Wojciech; Ahmad, Maaz; Kim, Vladimir; Schulz, Adriana; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    Parametric CAD systems use domain-specific languages (DSLs) to represent geometry as programs, enabling both flexible modeling and structured editing. With the rise of large language models (LLMs), there is growing interest in generating such programs from natural language. This raises a key question: what kind of DSL best supports both CAD generation and editing, whether performed by a human or an AI? In this work, we introduce AIDL, a hierarchical, solver-aided DSL designed to align with the strengths of LLMs while remaining interpretable and editable by humans. AIDL enables high-level reasoning by breaking problems into abstract components and structural relationships, while offloading low-level geometric reasoning to a constraint solver. We evaluate AIDL in a 2D text-to-CAD setting using a zero-shot prompt-based interface and compare it to OpenSCAD, a widely used CAD DSL that appears in LLM training data. AIDL produces results that are visually competitive and significantly easier to edit. Our findings suggest that language design is a powerful complement to model training and prompt engineering for building collaborative AI-human tools in CAD. Code is available at https://github.com/deGravity/aidl.
  • Item
    SPG: Style-Prompting Guidance for Style-Specific Content Creation
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Liang, Qian; Chen, Zichong; Zhou, Yang; Huang, Hui; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    Although recent text-to-image (T2I) diffusion models excel at aligning generated images with textual prompts, controlling the visual style of the output remains a challenging task. In this work, we propose Style-Prompting Guidance (SPG), a novel sampling strategy for style-specific image generation. SPG constructs a style noise vector and leverages its directional deviation from unconditional noise to guide the diffusion process toward the target style distribution. By integrating SPG with Classifier-Free Guidance (CFG), our method achieves both semantic fidelity and style consistency. SPG is simple, robust, and compatible with controllable frameworks like ControlNet and IPAdapter, making it practical and widely applicable. Extensive experiments demonstrate the effectiveness and generality of our approach compared to state-of-the-art methods. Code is available at https://github.com/Rumbling281441/SPG.
  • Item
    Introducing Unbiased Depth into 2D Gaussian Splatting for High-accuracy Surface Reconstruction
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Yang, Yixin; Zhou, Yang; Huang, Hui; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    Recently, 2D Gaussian Splatting (2DGS) has demonstrated superior geometry reconstruction quality to the popular 3DGS by using 2D surfels to approximate thin surfaces. However, it falls short when dealing with glossy surfaces, resulting in visible holes in these areas. We find that the reflection discontinuity causes the issue. To fit the jump from diffuse to specular reflection at different viewing angles, depth bias is introduced in the optimized Gaussian primitives. To address this, we first replace the depth distortion loss in 2DGS with a novel depth convergence loss, which imposes a strong constraint on depth continuity. Then, we rectify the depth criterion in determining the actual surface, which fully accounts for all the intersecting Gaussians along the ray. Qualitative and quantitative evaluations across various datasets reveal that our method significantly improves reconstruction quality, with more complete and accurate surfaces than 2DGS. Code is available at https://github.com/XiaoXinyyx/Unbiased_Surfel.
  • Item
    Joint Deblurring and 3D Reconstruction for Macrophotography
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Zhao, Yifan; Li, Liangchen; Zhou, Yuqi; Wang, Kai; Liang, Yan; Zhang, Juyong; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    Macro lenses offer high resolution and large magnification, and 3D models of small, detailed objects can provide richer information. However, defocus blur in macrophotography is a long-standing problem that severely hinders clear imaging of the captured objects and their high-quality 3D reconstruction. Traditional image deblurring methods require a large number of images and annotations, and there is currently no multi-view 3D reconstruction method for macrophotography. In this work, we propose a joint deblurring and 3D reconstruction method for macrophotography. Starting from captured multi-view blurry images, we jointly optimize the clear 3D model of the object and the defocus blur kernel of each pixel. The entire framework adopts a differentiable rendering method to self-supervise the optimization of the 3D model and the defocus blur kernel. Extensive experiments show that from a small number of multi-view images, our proposed method can not only achieve high-quality image deblurring but also recover high-fidelity 3D appearance.
  • Item
    BoxFusion: Reconstruction-Free Open-Vocabulary 3D Object Detection via Real-Time Multi-View Box Fusion
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Lan, Yuqing; Zhu, Chenyang; Gao, Zhirui; Zhang, Jiazhao; Cao, Yihan; Yi, Renjiao; Wang, Yijie; Xu, Kai; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    Open-vocabulary 3D object detection has gained significant interest due to its critical applications in autonomous driving and embodied AI. Existing detection methods, whether offline or online, typically rely on dense point cloud reconstruction, which imposes substantial computational overhead and memory constraints, hindering real-time deployment in downstream tasks. To address this, we propose a novel reconstruction-free online framework tailored for memory-efficient and real-time 3D detection. Specifically, given streaming posed RGB-D video input, we leverage Cubify Anything as a pre-trained visual foundation model (VFM) for single-view 3D object detection, coupled with CLIP to capture open-vocabulary semantics of detected objects. To fuse all detected bounding boxes across different views into a unified one, we employ an association module that establishes multi-view correspondences and an optimization module that fuses the 3D bounding boxes of the same instance. The association module utilizes 3D Non-Maximum Suppression (NMS) and a box correspondence matching module. The optimization module uses an IoU-guided efficient random optimization technique based on particle filtering to enforce multi-view consistency of the 3D bounding boxes while minimizing computational complexity. Extensive experiments on CA-1M and ScanNetV2 datasets demonstrate that our method achieves state-of-the-art performance among online methods. Benefiting from this novel reconstruction-free paradigm for 3D object detection, our method exhibits great generalization abilities in various scenarios, enabling real-time perception even in environments exceeding 1000 square meters.
  • Item
    Hybrid Sparse Transformer and Feature Alignment for Efficient Image Completion
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Chen, L.; Sun, Hao; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    In this paper, we propose an efficient single-stage hybrid architecture for image completion. Existing transformer-based image completion methods often struggle with accurate content restoration, largely due to their ineffective modeling of corrupted channel information and the attention noise introduced by softmax-based mechanisms, which results in blurry textures and distorted structures. Additionally, these methods frequently fail to maintain texture consistency, either relying on imprecise mask sampling or incurring substantial computational costs from complex similarity calculations. To address these limitations, we present two key contributions: a Hybrid Sparse Self-Attention (HSA) module and a Feature Alignment Module (FAM). The HSA module enhances structural recovery by decoupling spatial and channel attention with sparse activation, while the FAM enforces texture consistency by aligning encoder and decoder features via a mask-free, energy-gated mechanism without additional inference cost. Our method achieves state-of-the-art image completion results with the fastest inference speed among single-stage networks, as measured by PSNR, SSIM, FID, and LPIPS on CelebA-HQ, Places2, and Paris datasets.
  • Item
    G-SplatGAN: Disentangled 3D Gaussian Generation for Complex Shapes via Multi-Scale Patch Discriminators
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Li, Jiaqi; Dang, Haochuan; Zhou, Zhi; Zhu, Junke; Huang, Zhangjin; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    Generating 3D objects with complex topologies from monocular images remains a challenge in computer graphics, due to the difficulty of modeling varying 3D shapes with disentangled, steerable geometry and visual attributes. NeRF-based methods suffer from slow volumetric rendering and limited structural controllability; recent advances in 3D Gaussian Splatting provide a more efficient alternative, but generative modeling with separate control over structure and appearance remains underexplored. In this paper, we propose G-SplatGAN, a novel 3D-aware generation framework that combines the rendering efficiency of 3D Gaussian Splatting with disentangled latent modeling. Starting from a shared Gaussian template, our method uses dual modulation branches to modulate geometry and appearance from independent latent codes, enabling precise shape manipulation and controllable generation. We adopt a progressive adversarial training scheme with multi-scale and patch-based discriminators to capture both global structure and local detail. Our model requires no 3D supervision and is trained on monocular images with known camera poses, reducing data reliance while supporting real image inversion through a geometry-aware encoder. Experiments show that G-SplatGAN achieves superior performance in rendering speed, controllability and image fidelity, offering a compelling solution for controllable 3D generation using Gaussian representations.
  • Item
    TopoGen: Topology-Aware 3D Generation with Persistence Points
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Hu, Jiangbei; Fei, Ben; Xu, Baixin; Hou, Fei; Wang, Shengfa; Lei, Na; Yang, Weidong; Qian, Chen; He, Ying; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    Topological properties play a crucial role in the analysis, reconstruction, and generation of 3D shapes. Yet, most existing research focuses primarily on geometric features, due to the lack of effective representations for topology. In this paper, we introduce TopoGen, a method that extracts both discrete and continuous topological descriptors (Betti numbers and persistence points) using persistent homology. These features provide robust characterizations of 3D shapes in terms of their topology. We incorporate them as conditional guidance in generative models for 3D shape synthesis, enabling topology-aware generation from diverse inputs such as sparse and partial point clouds, as well as sketches. Furthermore, by modifying persistence points, we can explicitly control and alter the topology of generated shapes. Experimental results demonstrate that TopoGen enhances both diversity and controllability in 3D generation by embedding global topological structure into the synthesis process.
  • Item
    Accelerating Signed Distance Functions
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Hubert-Brierre, Pierre; Guérin, Eric; Peytavie, Adrien; Galin, Eric; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    Processing and particularly visualizing implicit surfaces remains computationally intensive when dealing with complex objects built from construction trees. We introduce optimization nodes to reduce the computational cost of the field function evaluation for hierarchical construction trees, while preserving the Lipschitz or conservative properties of the function. Our goal is to propose acceleration nodes directly embedded in the construction tree, and avoid external, accompanying data structures such as octrees. We present proxy and continuous level of detail nodes to reduce the overall evaluation cost, along with a normal warping technique that enhances surface details with negligible computational overhead. Our approach is compatible with existing algorithms that aim at reducing the number of function calls. We validate our methods by computing timings as well as the average cost for traversing the tree and evaluating the signed distance field at a given point in space. Our method speeds up signed distance field evaluation by up to three orders of magnitude, and applies both to ray-surface intersection computation in Sphere Tracing applications, and to polygonization algorithms.
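    The Sphere Tracing application mentioned above relies on the Lipschitz property the paper preserves: an SDF value bounds how far a ray can safely advance. As context only, a minimal sphere-tracing loop over a hypothetical toy SDF (`sdf_sphere`; the paper's acceleration nodes are not modeled here) looks like:

    ```python
    import math

    def sdf_sphere(p, center=(0.0, 0.0, 3.0), radius=1.0):
        # Signed distance from point p to a sphere.
        return math.dist(p, center) - radius

    def sphere_trace(origin, direction, sdf, max_steps=128, eps=1e-5, t_max=100.0):
        # March along the ray; a Lipschitz-1 SDF guarantees no surface lies
        # closer than sdf(p), so stepping by that distance never overshoots.
        t = 0.0
        for _ in range(max_steps):
            p = tuple(o + t * d for o, d in zip(origin, direction))
            d = sdf(p)
            if d < eps:
                return t          # hit: within eps of the surface
            t += d                # safe step
            if t > t_max:
                break
        return None               # miss

    hit = sphere_trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), sdf_sphere)  # ~2.0
    ```

    Every iteration costs one field evaluation, which is why cheapening the construction-tree traversal directly accelerates rendering.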
  • Item
    Using Saliency for Semantic Image Abstractions in Robotic Painting
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Stroh, Michael; Paetzold, Patrick; Berio, Daniel; Kehlbeck, Rebecca; Leymarie, Frederic Fol; Deussen, Oliver; Faraj, Noura; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    We present an adaptive, semantics-based abstraction approach that balances aesthetic quality and structural coherence within the practical constraints of robotic painting. We apply panoptic segmentation with color-based over-segmentation to partition images into meaningful regions aligned with semantic objects, while providing flexible abstraction levels. Automatic parameter selection for region merging is enabled by semantic saliency maps, derived from Out-of-Distribution segmentation techniques in combination with machine learning methods for feature detection. This preserves the boundaries of salient objects while simplifying less prominent regions. A graph-based community detection step further refines the abstraction by grouping regions according to local connectivity and semantic coherence. The runtime of our method outperforms optimization-based image vectorization methods, enabling the efficient generation of multiple abstraction levels that can serve as hierarchical layers for robotic painting. We demonstrate the quality of our method by showing abstraction results, robotic paintings with the e-David robot, and a comparison to other abstraction methods.
  • Item
    EmoDiffGes: Emotion-Aware Co-Speech Holistic Gesture Generation with Progressive Synergistic Diffusion
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Li, Xinru; Lin, Jingzhong; Zhang, Bohao; Qi, Yuanyuan; Wang, Changbo; He, Gaoqi; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    Co-speech gesture generation, driven by emotional expression and synergistic bodily movements, is essential for applications such as virtual avatars and human-robot interaction. Existing co-speech gesture generation methods face two fundamental limitations: (1) producing inexpressive gestures due to ignoring the temporal evolution of emotion; (2) generating incoherent and unnatural motions as a result of either holistic body oversimplification or independent part modeling. To address the above limitations, we propose EmoDiffGes, a diffusion-based framework grounded in embodied emotion theory, unifying dynamic emotion conditioning and part-aware synergistic modeling. Specifically, a Dynamic Emotion-Alignment Module (DEAM) is first applied to extract dynamic emotional cues and inject emotion guidance into the generation process. Then, a Progressive Synergistic Gesture Generator (PSGG) iteratively refines region-specific latent codes while maintaining full-body coordination, leveraging a Body Region Prior for part-specific encoding and Progressive Inter-Region Synergistic Flow for global motion coherence. Extensive experiments validate the effectiveness of our methods, showcasing the potential for generating expressive, coordinated, and emotionally grounded human gestures.
  • Item
    LTC-IR: Multiview Edge-Aware Inverse Rendering with Linearly Transformed Cosines
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Park, Dabeen; Park, Junsuh; Son, Jooeun; Lee, Seungyong; Lee, Joo Ho; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    Decomposing environmental lighting and materials is challenging as they are tightly intertwined and integrated over the hemisphere. In order to precisely decouple them, the lighting representation must represent general image features such as object boundaries or texture contrast, called light edges, which are often neglected in the existing inverse rendering methods. In this paper, we propose an inverse rendering method that efficiently captures light edges. We introduce a triangle mesh-based light representation that can express light edges by aligning triangle edges with light edges. We exploit the linearly transformed cosines as BRDF approximations to efficiently compute environmental lighting with our light representation. Our edge-aware inverse rendering precisely decouples distributions of reflectance and lighting through differentiable rendering by jointly reconstructing light edges and estimating the BRDF parameters. Our experiments, including various material/scene settings and ablation studies, demonstrate the reconstruction performance and computational efficiency of our method.
  • Item
    Text-Guided Diffusion with Spectral Convolution for 3D Human Pose Estimation
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Shi, Liyuan; Wu, Suping; Yang, Sheng; Qiu, Weibin; Qiang, Dong; Zhao, Jiarui; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    Although significant progress has been made in monocular video-based 3D human pose estimation, existing methods lack guidance from fine-grained high-level prior knowledge such as action semantics and camera viewpoints, leading to significant challenges for pose reconstruction accuracy under scenarios with severely missing visual features, i.e., complex occlusion situations. We identify that the 3D human pose estimation task fundamentally constitutes a canonical inverse problem, and propose a motion-semantics-based diffusion (MS-Diff) framework to address this issue by incorporating high-level motion semantics with spectral feature regularization to eliminate interference noise in complex scenes and improve estimation accuracy. Specifically, we design a Multimodal Diffusion Interaction (MDI) module that incorporates motion semantics including action categories and camera viewpoints into the diffusion process, establishing semantic-visual feature alignment through a cross-modal mechanism to resolve pose ambiguities and effectively handle occlusions. Additionally, we leverage a Spectral Convolutional Regularization (SCR) module that implements adaptive filtering in the frequency domain to selectively suppress noise components. Extensive experiments on large-scale public datasets Human3.6M and MPI-INF-3DHP demonstrate that our method achieves state-of-the-art performance.
  • Item
    FAHNet: Accurate and Robust Normal Estimation for Point Clouds via Frequency-Aware Hierarchical Geometry
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Wang, Chengwei; Wu, Wenming; Fei, Yue; Zhang, Gaofeng; Zheng, Liping; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    Point cloud normal estimation underpins many 3D vision and graphics applications. Precise normal estimation in regions of sharp curvature and high-frequency variation remains a major bottleneck; existing learning-based methods still struggle to isolate fine geometry details under noise and uneven sampling. We present FAHNet, a novel frequency-aware hierarchical network that precisely tackles those challenges. Our Frequency-Aware Hierarchical Geometry (FAHG) feature extraction module selectively amplifies and merges cross-scale cues, ensuring that both fine-grained local features and sharp structures are faithfully represented. Crucially, a dedicated Frequency-Aware geometry enhancement (FA) branch intensifies sensitivity to abrupt normal transitions and sharp features, preventing the common over-smoothing limitation. Extensive experiments on synthetic benchmarks (PCPNet, FamousShape) and real-world scans (SceneNN) demonstrate that FAHNet outperforms state-of-the-art approaches in normal estimation accuracy. Ablation studies further quantify the contribution of each component, and downstream surface reconstruction results validate the practical impact of our design.
  • Item
    Gaussian Splatting for Large-Scale Aerial Scene Reconstruction From Ultra-High-Resolution Images
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Sun, Qiulin; Lai, Wei; Li, Yixian; Zhang, Yanci; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    Using 3D Gaussian splatting to reconstruct large-scale aerial scenes from ultra-high-resolution images remains a challenging problem because of two memory bottlenecks: excessive Gaussian primitives and the tensor sizes required for ultra-high-resolution images. In this paper, we propose a task partitioning algorithm that operates in both object and image space to generate a set of small-scale subtasks. Each subtask's memory footprint is strictly limited, enabling training on a single high-end consumer-grade GPU. More specifically, Gaussian primitives are clustered into blocks in object space, and the input images are partitioned into sub-images according to the projected footprints of these blocks. This dual-space partitioning significantly reduces training memory requirements. During subtask training, we propose a depth comparison method to generate a mask map for each sub-image. This mask map isolates pixels primarily contributed by the Gaussian primitives of the current subtask, excluding all other pixels from training. Experimental results demonstrate that our method successfully achieves large-scale aerial scene reconstruction using 9K resolution images on a single RTX 4090 GPU. The novel views synthesized by our method retain significantly more details than those from current state-of-the-art methods.
  • Item
    PARC: A Two-Stage Multi-Modal Framework for Point Cloud Completion
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Cai, Yujiao; Su, Yuhao; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    Point cloud completion is vital for accurate 3D reconstruction, yet real-world scans frequently exhibit large structural gaps that compromise recovery. Meanwhile, in 2D vision, VAR (Visual Auto-Regression) has demonstrated that a coarse-to-fine "next-scale prediction" can significantly improve generation quality, inference speed, and generalization. Because this coarse-to-fine approach closely aligns with the progressive nature of filling missing geometry in point clouds, we were inspired to develop PARC (Patch-Aware Coarse-to-Fine Refinement Completion), a two-stage multimodal framework specifically designed for handling missing structures. In the pretraining stage, PARC leverages complete point clouds alongside a Patch-Aware Coarse-to-Fine Refinement (PAR) strategy and a Mixture-of-Experts (MoE) architecture to generate high-quality local fragments, thereby improving geometric structure understanding and feature representation quality. During finetuning, the model is adapted to partial scans, further enhancing its resilience to incomplete inputs. To address remaining uncertainties in areas with missing structure, we introduce a dual-branch architecture that incorporates image cues: point cloud and image features are extracted independently and then fused via the MoE with an alignment loss, allowing complementary modalities to guide reconstruction in occluded or missing regions. Experiments conducted on the ShapeNet-ViPC dataset show that PARC has achieved highly competitive performance. Code is available at https://github.com/caiyujiaocyj/PARC.
  • Item
    PaMO: Parallel Mesh Optimization for Intersection-Free Low-Poly Modeling on the GPU
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Oh, Seonghun; Yuan, Xiaodi; Wei, Xinyue; Shi, Ruoxi; Xiang, Fanbo; Liu, Minghua; Su, Hao; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    Reducing the triangle count in complex 3D models is a basic geometry preprocessing step in graphics pipelines such as efficient rendering and interactive editing. However, most existing mesh simplification methods exhibit a few issues. First, they often lead to self-intersections during decimation, a major issue for applications such as 3D printing and soft-body simulation. Second, to perform simplification on a mesh in the wild, one would first need to perform re-meshing, which often suffers from surface shifts and losses of sharp features. Finally, existing re-meshing and simplification methods can take minutes when processing large-scale meshes, limiting their applications in practice. To address these challenges, we introduce a novel GPU-based mesh optimization approach containing three key components: (1) a parallel re-meshing algorithm to turn meshes in the wild into watertight, manifold, and intersection-free ones, and reduce the prevalence of poorly shaped triangles; (2) a robust parallel simplification algorithm with intersection-free guarantees; (3) an optimization-based safe projection algorithm to realign the simplified mesh with the input, eliminating the surface shift introduced by re-meshing and recovering the original sharp features. The algorithm demonstrates remarkable efficiency, simplifying a 2-million-face mesh to 20k triangles in 3 seconds on an RTX 4090. We evaluated the approach on the Thingi10K dataset and showcased its exceptional performance in geometry preservation and speed. https://seonghunn.github.io/pamo/
  • Item
    Multimodal 3D Few-Shot Classification via Gaussian Mixture Discriminant Analysis
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Wu, Yiqi; Wu, Huachao; Hu, Ronglei; Chen, Yilin; Zhang, Dejun; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    While pre-trained 3D vision-language models are becoming increasingly available, there remains a lack of frameworks that can effectively harness their capabilities for few-shot classification. In this work, we propose PointGMDA, a training-free framework that combines Gaussian Mixture Models (GMMs) with Gaussian Discriminant Analysis (GDA) to perform robust classification using only a few labeled point cloud samples. Our method estimates GMM parameters per class from support data and computes mixture-weighted prototypes, which are then used in GDA with a shared covariance matrix to construct decision boundaries. This formulation allows us to model intra-class variability more expressively than traditional single-prototype approaches, while maintaining analytical tractability. To incorporate semantic priors, we integrate CLIP-style textual prompts and fuse predictions from geometric and textual modalities through a hybrid scoring strategy. We further introduce PointGMDA-T, a lightweight attention-guided refinement module that learns residuals for fast feature adaptation, improving robustness under distribution shift. Extensive experiments on ModelNet40 and ScanObjectNN demonstrate that PointGMDA outperforms strong baselines across a variety of few-shot settings, with consistent gains under both training-free and fine-tuned conditions. These results highlight the effectiveness and generality of our probabilistic modeling and multimodal adaptation framework. Our code is publicly available at https://github.com/djzgroup/PointGMDA.
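    The shared-covariance GDA decision rule the abstract builds on is standard and training-free. As background only (this sketch uses hypothetical toy features and single per-class means, not the paper's GMM prototypes or CLIP fusion):

    ```python
    import numpy as np

    def gda_fit(features, labels):
        # One mean per class, a single covariance pooled across all classes.
        classes = np.unique(labels)
        means = {c: features[labels == c].mean(axis=0) for c in classes}
        centered = np.vstack([features[labels == c] - means[c] for c in classes])
        cov = centered.T @ centered / len(features)
        cov += 1e-6 * np.eye(cov.shape[0])      # regularize for invertibility
        return means, np.linalg.inv(cov)

    def gda_predict(x, means, cov_inv):
        # With a shared covariance the boundaries are linear: pick the class
        # whose mean has the smallest Mahalanobis distance to x.
        scores = {c: (x - m) @ cov_inv @ (x - m) for c, m in means.items()}
        return min(scores, key=scores.get)

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 0.1, (5, 3)), rng.normal(1, 0.1, (5, 3))])
    y = np.array([0] * 5 + [1] * 5)
    means, cov_inv = gda_fit(X, y)
    pred = gda_predict(np.ones(3) * 0.9, means, cov_inv)
    ```

    Because everything is closed-form, no gradient training is needed, which is what makes such classifiers attractive in few-shot settings.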
  • Item
    Preconditioned Deformation Grids
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Kaltheuner, Julian; Oebel, Alexander; Droege, Hannah; Stotko, Patrick; Klein, Reinhard; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    Dynamic surface reconstruction of objects from point cloud sequences is a challenging field in computer graphics. Existing approaches either require multiple regularization terms or extensive training data which, however, lead to compromises in reconstruction accuracy as well as over-smoothing or poor generalization to unseen objects and motions. To address these limitations, we introduce Preconditioned Deformation Grids, a novel technique for estimating coherent deformation fields directly from unstructured point cloud sequences without requiring or forming explicit correspondences. Key to our approach is the use of multi-resolution voxel grids that capture the overall motion at varying spatial scales, enabling a more flexible deformation representation. In conjunction with incorporating grid-based Sobolev preconditioning into gradient-based optimization, we show that applying a Chamfer loss between the input point clouds as well as to an evolving template mesh is sufficient to obtain accurate deformations. To ensure temporal consistency along the object surface, we include a weak isometry loss on mesh edges which complements the main objective without constraining deformation fidelity. Extensive evaluations demonstrate that our method achieves superior results, particularly for long sequences, compared to state-of-the-art techniques.
  • Item
    WaterGS: Physically-Based Imaging in Gaussian Splatting for Underwater Scene Reconstruction
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) , Su Qing Wang; Wu, Wen Bin; Shi, Min; Li, Zhao Xin; Wang, Qi; Zhu, Deng Ming; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    Reconstructing underwater object geometry from multi-view images is a long-standing challenge in computer graphics, primarily due to image degradation caused by underwater scattering, blur, and color shift. These degradations severely impair feature extraction and multi-view consistency. Existing methods typically rely on pre-trained image enhancement models as a preprocessing step, but often struggle with robustness under varying water conditions. To overcome these limitations, we propose WaterGS, a novel framework for underwater surface reconstruction that jointly recovers accurate 3D geometry and restores true object colors. The core of our approach lies in introducing a Physically-Based imaging model into the rendering process of 2D Gaussian Splatting. This enables accurate separation of true object colors from water-induced distortions, thereby facilitating more robust photometric alignment and denser geometric reconstruction across views. Building upon this improved photometric consistency, we further introduce a Gaussian bundle adjustment scheme guided by our physical model to jointly optimize camera poses and geometry, enhancing reconstruction accuracy. Extensive experiments on synthetic and real-world datasets show that WaterGS achieves robust, high-fidelity reconstruction directly from raw underwater images, outperforming prior approaches in both geometric accuracy and visual consistency.
  • Item
    Gaussians on their Way: Wasserstein-Constrained 4D Gaussian Splatting with State-Space Modeling
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Deng, Junli; Shi, Ping; Luo, Yihao; Li, Qipei; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    Dynamic scene rendering has taken a leap forward with the rise of 4D Gaussian Splatting, but there is still one elusive challenge: how to make 3D Gaussians move through time as naturally as they would in the real world, all while keeping the motion smooth and consistent. In this paper, we present an approach that blends state-space modeling with Wasserstein geometry, enabling a more fluid and coherent representation of dynamic scenes. We introduce a State Consistency Filter that merges prior predictions with the current observations, enabling Gaussians to maintain coherent trajectories over time. We also employ Wasserstein Consistency Constraint to ensure smooth, consistent updates of Gaussian parameters, reducing motion artifacts. Lastly, we leverage Wasserstein geometry to capture both translational motion and shape deformations, creating a more geometrically consistent model for dynamic scenes. Our approach models the evolution of Gaussians along geodesics on the manifold of Gaussian distributions, achieving smoother, more realistic motion and stronger temporal coherence. Experimental results show consistent improvements in rendering quality and efficiency.
  • Item
    Real-Time Per-Garment Virtual Try-On with Temporal Consistency for Loose-Fitting Garments
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Wu, Zaiqiang; Shen, I-Chao; Igarashi, Takeo; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    Per-garment virtual try-on methods collect garment-specific datasets and train networks tailored to each garment to achieve superior results. However, these approaches often struggle with loose-fitting garments due to two key limitations: (1) They rely on human body semantic maps to align garments with the body, but these maps become unreliable when body contours are obscured by loose-fitting garments, resulting in degraded outcomes; (2) They train garment synthesis networks on a per-frame basis without utilizing temporal information, leading to noticeable jittering artifacts. To address the first limitation, we propose a two-stage approach for robust semantic map estimation. First, we extract a garment-invariant representation from the raw input image. This representation is then passed through an auxiliary network to estimate the semantic map. This enhances the robustness of semantic map estimation under loose-fitting garments during garment-specific dataset generation. To address the second limitation, we introduce a recurrent garment synthesis framework that incorporates temporal dependencies to improve frame-to-frame coherence while maintaining real-time performance. We conducted qualitative and quantitative evaluations to demonstrate that our method outperforms existing approaches in both image quality and temporal coherence. Ablation studies further validate the effectiveness of the garment-invariant representation and the recurrent synthesis framework.
  • Item
    LayoutRectifier: An Optimization-based Post-processing for Graphic Design Layout Generation
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Shen, I-Chao; Shamir, Ariel; Igarashi, Takeo; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    Recent deep learning methods can generate diverse graphic design layouts efficiently. However, these methods often create layouts with flaws, such as misalignment, unwanted overlaps, and unsatisfied containment. To tackle this issue, we propose an optimization-based method called LayoutRectifier, which gracefully rectifies auto-generated graphic design layouts to reduce these flaws while minimizing deviation from the generated layout. The core of our method is a two-stage optimization. First, we utilize grid systems, which professional designers commonly use to organize elements, to mitigate misalignments through discrete search. Second, we introduce a novel box containment function designed to adjust the positions and sizes of the layout elements, preventing unwanted overlapping and promoting desired containment. We evaluate our method on content-agnostic and content-aware layout generation tasks and achieve better-quality layouts that are more suitable for downstream graphic design tasks. Our method complements learning-based layout generation methods and does not require additional training.
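The grid-based alignment idea can be sketched with a toy snap-to-grid pass over layout boxes. This is only an illustrative stand-in for the discrete search described in the abstract; the function names and the 8-unit grid are assumptions:

```python
def snap_to_grid(v, step):
    """Snap a coordinate to the nearest grid line (via Python's round)."""
    return round(v / step) * step

def rectify_layout(boxes, step=8):
    """Snap each box (x, y, w, h) onto the grid, keeping a minimum
    size of one grid cell so no element collapses."""
    return [(snap_to_grid(x, step),
             snap_to_grid(y, step),
             max(step, snap_to_grid(w, step)),
             max(step, snap_to_grid(h, step)))
            for x, y, w, h in boxes]
```

A real rectifier would search over candidate grid placements and score deviation from the generated layout, rather than snapping greedily as here.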
  • Item
    FlowCapX: Physics-Grounded Flow Capture with Long-Term Consistency
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Tao, Ningxiao; Zhang, Liru; Ni, Xingyu; Chu, Mengyu; Chen, Baoquan; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
    We present FlowCapX, a physics-enhanced framework for flow reconstruction from sparse video inputs, addressing the challenge of jointly optimizing complex physical constraints and sparse observational data over long time horizons. Existing methods often struggle to capture turbulent motion while maintaining physical consistency, limiting reconstruction quality and downstream tasks. Focusing on velocity inference, our approach introduces a hybrid framework that strategically separates representation and supervision across spatial scales. At the coarse level, we resolve sparse-view ambiguities via a novel optimization strategy that aligns long-term observation with physics-grounded velocity fields. By emphasizing vorticity-based physical constraints, our method enhances physical fidelity and improves optimization stability. At the fine level, we prioritize observational fidelity to preserve critical turbulent structures. Extensive experiments demonstrate state-of-the-art velocity reconstruction, enabling velocity-aware downstream tasks, e.g., accurate flow analysis, scene augmentation with tracer visualization and re-simulation. Our implementation is released at https://github.com/taoningxiao/FlowCapX.git.
  • Item
    Geometric Integration for Neural Control Variates
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Meister, Daniel; Harada, Takahiro; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
Control variates are a variance-reduction technique for Monte Carlo integration. The principle is to approximate the integrand by a function that can be integrated analytically, and to apply the Monte Carlo method only to the residual difference between the integrand and the approximation, yielding an unbiased estimate. Neural networks are universal approximators that could potentially serve as control variates. However, the challenge lies in the analytic integration, which is not possible in general. In this manuscript, we study one of the simplest neural network models, the multilayer perceptron (MLP) with continuous piecewise linear activation functions, and its analytic integration. We propose an integration method based on subdividing the integration domain, employing techniques from computational geometry to solve this problem in 2D. We demonstrate that an MLP can be used as a control variate in combination with our integration method, with applications in light transport simulation.
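The control-variate principle stated in the abstract fits in a few lines: integrate the residual by Monte Carlo and add back the known analytic integral of the surrogate. The example below uses a hand-picked linear surrogate rather than an MLP, purely for illustration:

```python
import math
import random

def mc_estimate(f, n, rng):
    """Plain Monte Carlo estimate of the integral of f over [0, 1]."""
    return sum(f(rng.random()) for _ in range(n)) / n

def cv_estimate(f, g, G, n, rng):
    """Control-variate estimate: Monte Carlo on the residual f - g,
    plus the known analytic integral G of the surrogate g."""
    total = 0.0
    for _ in range(n):
        x = rng.random()
        total += f(x) - g(x)
    return G + total / n

# f(x) = e^x on [0, 1]; g(x) = 1 + x is a crude piecewise-linear
# surrogate whose analytic integral is G = 1.5.
rng = random.Random(0)
est = cv_estimate(math.exp, lambda x: 1.0 + x, 1.5, 50_000, rng)
# true value: e - 1
```

Because the residual `e^x - 1 - x` varies far less than `e^x` itself, the estimator's variance drops while the estimate stays unbiased; the paper's contribution is making `g` an MLP whose integral `G` is computed exactly by geometric domain subdivision.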