RigidFusion: RGB-D Scene Reconstruction with Rigidly-moving Objects

dc.contributor.author: Wong, Yu-Shiang
dc.contributor.author: Li, Changjian
dc.contributor.author: Nießner, Matthias
dc.contributor.author: Mitra, Niloy J.
dc.contributor.editor: Mitra, Niloy and Viola, Ivan
dc.date.accessioned: 2021-04-09T08:01:56Z
dc.date.available: 2021-04-09T08:01:56Z
dc.date.issued: 2021
dc.description.abstract: Although surface reconstruction from depth data has made significant advances in recent years, handling changing environments remains a major challenge. This is unsatisfactory, as humans regularly move objects in their environments. Existing solutions focus on a restricted set of objects (e.g., those detected by semantic classifiers), possibly with template meshes, assume a static camera, or mark objects touched by humans as moving. We remove these assumptions by introducing RigidFusion. Our core idea is a novel asynchronous moving-object detection method, combined with a modified volumetric fusion. This is achieved by a model-to-frame TSDF decomposition that leverages free-space carving of tracked depth values of the current frame with respect to the background model at run-time. As output, we produce separate volumetric reconstructions for the background and each moving object in the scene, along with its trajectory over time. Our method does not rely on object priors (e.g., semantic labels or pre-scanned meshes) and is insensitive to the motion residuals between objects and the camera. In comparison to state-of-the-art methods (e.g., Co-Fusion, MaskFusion), we handle significantly more challenging reconstruction scenarios involving a moving camera and improve moving-object detection (by 26% on miss-detection ratio), tracking (by 27% on MOTA), and reconstruction (by 3% on reconstruction F1) on the synthetic dataset. Please refer to the supplementary material and the project website for a video demonstration (geometry.cs.ucl.ac.uk/projects/2021/rigidfusion).
dc.description.number: 2
dc.description.sectionheaders: Analyzing and Integrating RGB-D Images
dc.description.seriesinformation: Computer Graphics Forum
dc.description.volume: 40
dc.identifier.doi: 10.1111/cgf.142651
dc.identifier.issn: 1467-8659
dc.identifier.pages: 511-522
dc.identifier.uri: https://doi.org/10.1111/cgf.142651
dc.identifier.uri: https://diglib.eg.org:443/handle/10.1111/cgf142651
dc.publisher: The Eurographics Association and John Wiley & Sons Ltd.
dc.subject: Computing methodologies
dc.subject: Reconstruction
dc.subject: Tracking
dc.subject: Video segmentation
dc.subject: Image segmentation
dc.title: RigidFusion: RGB-D Scene Reconstruction with Rigidly-moving Objects
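
The abstract above describes a model-to-frame TSDF decomposition driven by free-space carving: depth samples of the current frame that land in space the background model has already observed as empty are attributed to moving objects. The following is a minimal, hypothetical sketch of that idea, not the authors' implementation; the grid layout, the `free_thresh` parameter, and the function names are assumptions made purely for illustration.

```python
import numpy as np

def world_to_voxel(points, origin, voxel_size):
    """Map world-space points (N,3) to integer voxel indices of the TSDF grid."""
    return np.floor((points - origin) / voxel_size).astype(int)

def detect_moving_points(points_world, tsdf, origin, voxel_size, free_thresh=0.5):
    """Return a boolean mask over depth samples: True = likely moving object.

    points_world : (N,3) back-projected depth samples in world coordinates,
                   already aligned to the background model by camera tracking.
    tsdf         : (X,Y,Z) truncated signed distance grid of the background,
                   where values near +1 correspond to observed free space.
    free_thresh  : TSDF value above which a voxel counts as carved free space
                   (hypothetical threshold, chosen for this sketch).
    """
    idx = world_to_voxel(points_world, origin, voxel_size)
    inside = np.all((idx >= 0) & (idx < np.array(tsdf.shape)), axis=1)
    moving = np.zeros(len(points_world), dtype=bool)
    vals = tsdf[idx[inside, 0], idx[inside, 1], idx[inside, 2]]
    # A surface sample falling in confidently free background space conflicts
    # with the static model, so it is attributed to a moving object.
    moving[inside] = vals > free_thresh
    return moving

if __name__ == "__main__":
    # Toy example: a 32^3 background grid marked entirely as free space,
    # so any depth sample inside the grid is flagged as moving.
    grid = np.ones((32, 32, 32), dtype=np.float32)
    pts = np.array([[0.5, 0.5, 0.5], [5.0, 5.0, 5.0]])  # second point lies outside the grid
    print(detect_moving_points(pts, grid, origin=np.zeros(3), voxel_size=0.1))
```

In the actual system this test would feed a per-frame segmentation and a separate TSDF volume per detected object; the sketch only illustrates the free-space conflict check described in the abstract.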
Files
Original bundle (showing 3 of 3):
v40i2pp511-522.pdf (6.44 MB, Adobe Portable Document Format)
rigidfusion_supplementary.pdf (2.3 MB, Adobe Portable Document Format)
rigidfusion_video.mp4 (84.64 MB, unknown data format)