Title: Balancing Speed and Visual Fidelity of Dynamic Point Cloud Rendering in VR
Authors: Muehlenbrock, Andre; Weller, Rene; Zachmann, Gabriel
Editors: Jorge, Joaquim A.; Sakata, Nobuchika
Issued: 2025 (accessioned/available 2025-11-26)
ISBN: 978-3-03868-278-3
ISSN: 1727-530X
DOI: https://doi.org/10.2312/egve.20251353
Handle: https://diglib.eg.org/handle/10.2312/egve20251353
Pages: 6
License: Attribution 4.0 International License
CCS Concepts: Computing methodologies → Rendering; Virtual reality; Point-based models; Mesh geometry models

Abstract: Efficient rendering of dynamic point clouds from multiple RGB-D cameras is essential for a wide range of VR/AR applications. In this work, we introduce and leverage two key parameters in a mesh-based rendering approach and conduct a systematic study of their impact on the trade-off between rendering speed and perceptual quality. We show that both parameters enable substantial performance improvements while causing only negligible visual degradation. Across four GPU generations and multiple deployment scenarios, dynamic point clouds streamed continuously from seven Microsoft Azure Kinects can be rendered binocularly at triple-digit frame rates, even on mid-range GPUs. Our results provide practical guidelines for balancing visual fidelity and efficiency in real-time VR point cloud rendering, demonstrating that mesh-based approaches are a scalable and versatile solution for applications ranging from consumer headsets to large-scale projection systems.
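Note: The abstract does not detail the mesh construction, but a common mesh-based formulation of RGB-D point cloud rendering triangulates the depth image on a regular grid and culls triangles that span depth discontinuities; the grid step and the discontinuity threshold are plausibly the kind of speed/quality parameters such a study would vary. The sketch below illustrates this general technique only. All names and values (buildGridIndices, the step and maxJumpMm parameters, the 640x576 depth resolution) are illustrative assumptions, not the paper's implementation.

// Minimal sketch (not the authors' method): build a triangle index list over
// a depth-image grid, skipping triangles across large depth discontinuities.
#include <algorithm>
#include <cstdint>
#include <vector>

struct DepthImage {
    int width = 640, height = 576;   // e.g., an Azure Kinect depth mode (assumption)
    std::vector<uint16_t> depthMm;   // row-major depth in millimeters, 0 = invalid
};

// Emits two triangles per grid cell. `step` subsamples the grid, trading
// triangle count (speed) against geometric detail (quality); `maxJumpMm`
// culls triangles whose vertex depths differ too much, hiding "rubber
// sheet" artifacts at object silhouettes. Both knobs are assumptions here.
std::vector<uint32_t> buildGridIndices(const DepthImage& img,
                                       int step, uint16_t maxJumpMm) {
    std::vector<uint32_t> indices;
    auto depthAt = [&](int x, int y) { return img.depthMm[y * img.width + x]; };
    auto validTri = [&](uint16_t a, uint16_t b, uint16_t c) {
        if (a == 0 || b == 0 || c == 0) return false;   // skip invalid pixels
        auto lo = std::min({a, b, c}), hi = std::max({a, b, c});
        return static_cast<uint16_t>(hi - lo) <= maxJumpMm;  // continuity test
    };
    for (int y = 0; y + step < img.height; y += step) {
        for (int x = 0; x + step < img.width; x += step) {
            // Corner indices of the current grid cell.
            uint32_t i00 = y * img.width + x;
            uint32_t i10 = i00 + step;
            uint32_t i01 = i00 + step * img.width;
            uint32_t i11 = i01 + step;
            uint16_t d00 = depthAt(x, y),        d10 = depthAt(x + step, y);
            uint16_t d01 = depthAt(x, y + step), d11 = depthAt(x + step, y + step);
            if (validTri(d00, d10, d01)) indices.insert(indices.end(), {i00, i10, i01});
            if (validTri(d10, d11, d01)) indices.insert(indices.end(), {i10, i11, i01});
        }
    }
    return indices;  // upload per frame, or keep a static grid and cull on the GPU
}

In this sketch, increasing step coarsens the mesh (fewer triangles to rasterize per camera and per eye), while maxJumpMm controls how aggressively silhouette-spanning triangles are removed; both affect perceived quality far less than they affect triangle throughput, which is consistent with the speed/fidelity trade-off the abstract describes.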