Title: BlendPCR: Seamless and Efficient Rendering of Dynamic Point Clouds captured by Multiple RGB-D Cameras
Authors: Mühlenbrock, Andre; Weller, Rene; Zachmann, Gabriel
Editors: Hasegawa, Shoichi; Sakata, Nobuchika; Sundstedt, Veronica
Date issued: 2024-11-29
Date available: 2024-11-29
Year: 2024
ISBN: 978-3-03868-245-5
ISSN: 1727-530X
DOI: https://doi.org/10.2312/egve.20241366
Handle: https://diglib.eg.org/handle/10.2312/egve20241366
Pages: 10
License: Attribution 4.0 International License
CCS Concepts: Computing methodologies → Rendering; Virtual reality; Point-based models; Mesh geometry models
Keywords: Computing methodologies → Rendering; Virtual reality; Point-based models; Mesh geometry models

Abstract: Traditional techniques for rendering continuous surfaces from dynamic, noisy point clouds captured by multi-camera setups often suffer from disruptive artifacts in overlapping areas, similar to z-fighting. We introduce BlendPCR, an advanced rendering technique that effectively addresses these artifacts through a dual approach of point cloud processing and screen-space blending. Additionally, we present a UV coordinate encoding scheme to enable high-resolution texture mapping via standard camera SDKs. We demonstrate that our approach offers superior visual rendering quality over traditional splat- and mesh-based methods and exhibits no artifacts in those overlapping areas, which still occur in leading-edge NeRF- and Gaussian-Splat-based approaches such as Pointersect and P2ENet. In practical tests with seven Microsoft Azure Kinects, processing, including uploading the point clouds to the GPU, requires only 13.8 ms (when using one color per point) or 29.2 ms (when using high-resolution color textures), and rendering at a resolution of 3580 × 2066 takes just 3.2 ms, proving its suitability for real-time VR applications.
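
Note: The record itself contains no code. The following is a minimal, hypothetical C++ sketch of the screen-space blending idea mentioned in the abstract: each camera's point cloud is rendered into its own per-pixel color/weight buffer, and the final image is the weight-normalized blend, so overlapping camera contributions mix smoothly instead of producing z-fighting-like artifacts. The CameraBuffer structure, the blendScreenSpace function, and the weighting scheme are illustrative assumptions, not the paper's implementation.

    // Illustrative sketch only (assumption, not the authors' method):
    // blend N per-camera screen-space renders by normalized per-pixel weights.
    #include <array>
    #include <cstddef>
    #include <vector>

    struct CameraBuffer {
        std::vector<std::array<float, 3>> color;  // per-pixel RGB from one camera's render
        std::vector<float> weight;                // per-pixel blend weight (e.g. confidence)
    };

    // For every pixel, accumulate weighted colors from all cameras and normalize,
    // so regions seen by several cameras receive a smooth mixture of their colors.
    std::vector<std::array<float, 3>> blendScreenSpace(
        const std::vector<CameraBuffer>& cams, std::size_t pixelCount)
    {
        std::vector<std::array<float, 3>> out(pixelCount, {0.f, 0.f, 0.f});
        for (std::size_t p = 0; p < pixelCount; ++p) {
            float wSum = 0.f;
            for (const CameraBuffer& cam : cams) {
                const float w = cam.weight[p];
                out[p][0] += w * cam.color[p][0];
                out[p][1] += w * cam.color[p][1];
                out[p][2] += w * cam.color[p][2];
                wSum += w;
            }
            if (wSum > 0.f) {
                for (float& c : out[p]) c /= wSum;  // normalize to keep colors in range
            }
        }
        return out;
    }

In a real pipeline this blending would run in a fragment or compute shader on the GPU rather than on the CPU; the sketch only conveys the weighted-average structure of the screen-space pass.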