Tran, Thi Thanh Hoa; Peillard, Etienne; Walsh, James; Moreau, Guillaume; Thomas, Bruce; Jorge, Joaquim A.; Sakata, Nobuchika
Dates: 2025-11-26; 2025-11-26; 2025
ISBN: 978-3-03868-278-3
ISSN: 1727-530X
DOI: https://doi.org/10.2312/egve.20251351
Handle: https://diglib.eg.org/handle/10.2312/egve20251351

Title: Trust and Safety in Autonomous Vehicles: Evaluating Contextual Visualizations for Highlighting, Prediction, and Anchoring

Abstract: For autonomous vehicles (AVs) to be widely accepted, users must not only feel safe but also understand how the vehicle perceives and responds to its environment. Augmented Reality (AR) enables real-time, intuitive communication of such information, helping foster trust and enhance situation awareness (SA). This paper presents the results of three online user studies that investigate the design of different AR visualization strategies in simulated AV environments. Although the studies used prerecorded videos, they were designed to simulate ecologically realistic driving scenarios. Study 1 evaluates six types of highlight visualizations (bounding box, spotlight, point arrow, zoom, semantic segmentation, and baseline) across five driving scenarios varying in complexity and visibility. The results show that highlight effectiveness is scenario-dependent, with bounding boxes and spotlights being more effective in occluded or ambiguous conditions. Study 2 explores predictive visualizations, comparing single vs. multiple predicted paths and goals to communicate future trajectories. Findings indicate that single-path predictions are most effective for enhancing trust and safety, while multi-goal visualizations are perceived as less clear and less helpful. Study 3 examines the impact of spatial anchoring in AR by comparing screen-fixed and world-fixed presentations of time-to-contact information. Results demonstrate that world-fixed visualizations significantly improve trust, perceived safety, and object detectability compared to screen-fixed displays. Together, these studies provide key insights into when, what, and how AR visualizations should be presented in AVs to effectively support passenger understanding. The findings inform the design of adaptive AR interfaces that tailor visual feedback based on scenario complexity, uncertainty, and environmental context.

License: Attribution 4.0 International License
CCS Concepts: Human-centered computing → Mixed / augmented reality
Pages: 10