HandFlow: Quantifying View-Dependent 3D Ambiguity in Two-Hand Reconstruction with Normalizing Flow

Abstract
Reconstructing two-hand interactions from a single image is a challenging problem due to ambiguities that stem from projective geometry and heavy occlusions. Existing methods are designed to estimate only a single pose, despite the fact that other valid reconstructions exist that fit the image evidence equally well. In this paper, we propose to address this issue by explicitly modeling the distribution of plausible reconstructions in a conditional normalizing flow framework. This allows us to directly supervise the posterior distribution through a novel determinant magnitude regularization, which is key to producing varied 3D hand pose samples that project well into the input image. We also demonstrate that metrics commonly used to assess reconstruction quality are insufficient to evaluate pose predictions under such severe ambiguity. To address this, we release MultiHands, the first dataset with multiple plausible annotations per image. The additional annotations enable us to evaluate the estimated distribution using the maximum mean discrepancy metric. Through this, we demonstrate the quality of our probabilistic reconstruction and show that explicit ambiguity modeling is better suited to this challenging problem.
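The abstract evaluates a predicted pose distribution against multiple plausible ground-truth annotations using the maximum mean discrepancy (MMD). As a rough illustration of that metric, the sketch below computes a biased MMD² estimate between two sample sets with an RBF kernel; the function name, kernel choice, and bandwidth are illustrative assumptions, not the paper's exact evaluation protocol.

```python
import numpy as np

def mmd_rbf(X, Y, sigma=1.0):
    """Biased MMD^2 estimate between sample sets X and Y (RBF kernel).

    X, Y: arrays of shape (n_samples, dim), e.g. flattened 3D hand poses.
    Note: illustrative sketch, not the paper's exact evaluation code.
    """
    def k(A, B):
        # Squared pairwise Euclidean distances, then Gaussian kernel.
        d2 = (np.sum(A**2, axis=1)[:, None]
              + np.sum(B**2, axis=1)[None, :]
              - 2.0 * A @ B.T)
        return np.exp(-d2 / (2.0 * sigma**2))

    # MMD^2 = E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)]
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()
```

Identical sample sets yield MMD² of zero, while sets drawn from different distributions yield a positive value, which is what makes the metric suitable for comparing an estimated pose distribution against a set of plausible annotations.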
CCS Concepts: Computing methodologies → Tracking; Computer vision; Neural networks

@inproceedings{10.2312:vmv.20221209,
  booktitle = {Vision, Modeling, and Visualization},
  editor    = {Bender, Jan and Botsch, Mario and Keim, Daniel A.},
  title     = {{HandFlow: Quantifying View-Dependent 3D Ambiguity in Two-Hand Reconstruction with Normalizing Flow}},
  author    = {Wang, Jiayi and Luvizon, Diogo and Mueller, Franziska and Bernard, Florian and Kortylewski, Adam and Casas, Dan and Theobalt, Christian},
  year      = {2022},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-189-2},
  DOI       = {10.2312/vmv.20221209}
}