Title: On-Device 3D Point Cloud Segmentation for Mixed Reality Applications
Authors: Czajkowski, Andrzej; Kowal, Marek; Greszczyński, Rafael; Garro, Valeria; Young, Gareth; Elwardy, Majed
Date: 2025-11-26
ISBN: 978-3-03868-279-0
ISSN: 1727-530X
DOI: https://doi.org/10.2312/egve.20251364
Handle: https://diglib.eg.org/handle/10.2312/egve20251364
Pages: 2
License: Attribution 4.0 International License
CCS Concepts: Human-centered computing → Mixed / augmented reality; Computing methodologies → Mesh models

Abstract: This paper presents preliminary research on segmenting 3D space captured by the depth sensor integrated into the HTC Vive XR Elite headset. The obtained 3D point cloud serves as the basis for a segmentation algorithm that identifies individual objects by detecting planes and grouping them into bounding boxes. Since spatial data is not available in the so-called PCVR mode, all computations had to be performed directly on the head-mounted display (HMD), which is heavily constrained in terms of performance. The proposed approach provides a practical foundation for replacing detected objects with virtual counterparts (overlaying), while meeting computational and energy efficiency requirements for prolonged headset operation.
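
The abstract describes a pipeline of plane detection followed by grouping the remaining points into object bounding boxes, but the record does not disclose the implementation. The sketch below is only an illustration of that general pipeline, assuming the Open3D library, RANSAC plane fitting, and DBSCAN clustering; the function name `segment_objects` and all thresholds are hypothetical and are not taken from the paper.

```python
# Illustrative sketch only; not the authors' on-device algorithm.
# Assumes Open3D and a generic RANSAC-plane + DBSCAN pipeline.
import open3d as o3d


def segment_objects(pcd: o3d.geometry.PointCloud,
                    max_planes: int = 4,
                    dist_thresh: float = 0.02):
    """Peel off dominant planes (e.g. floor and walls), then cluster the
    remaining points into objects and wrap each cluster in an AABB."""
    rest = pcd
    for _ in range(max_planes):
        if len(rest.points) < 100:
            break
        # Fit one plane with RANSAC and remove its inliers from the cloud.
        _, inliers = rest.segment_plane(distance_threshold=dist_thresh,
                                        ransac_n=3,
                                        num_iterations=500)
        rest = rest.select_by_index(inliers, invert=True)

    # Group the leftover (non-planar) points into candidate objects.
    labels = rest.cluster_dbscan(eps=0.05, min_points=30)
    boxes = []
    for label in set(labels):
        if label < 0:  # -1 marks noise points
            continue
        idx = [i for i, l in enumerate(labels) if l == label]
        cluster = rest.select_by_index(idx)
        boxes.append(cluster.get_axis_aligned_bounding_box())
    return boxes
```

Given the headset's performance and energy constraints noted in the abstract, an actual on-device implementation would likely need a native, lower-overhead realization; the Python version above only illustrates the structure of the plane-then-box pipeline.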