Authors: Tsai, Hao-Ming; Wong, Sai-Keung
Editors: Christie, Marc; Han, Ping-Hsuan; Lin, Shih-Syun; Pietroni, Nico; Schneider, Teseo; Tsai, Hsin-Ruey; Wang, Yu-Shuen; Zhang, Eugene
Date issued: 2025-10-07
Year: 2025
ISBN: 978-3-03868-295-0
DOI: https://doi.org/10.2312/pg.20251261
Handle: https://diglib.eg.org/handle/10.2312/pg20251261

Title: Animating Vehicles Risk-Aware Interaction with Pedestrians Using Deep Reinforcement Learning

Abstract: This paper introduces a deep reinforcement learning-based system for ego vehicle control, enabling interaction with dynamic objects such as pedestrians and animals. These objects display varied crossing behaviors, including sudden stops and directional shifts. The system uses a perception module to identify road structures, key pedestrians, inner wheel difference zones, and object movements, allowing the vehicle to make context-aware decisions such as yielding, turning, or maintaining speed. The training process includes reward terms for speed, time, time-to-collision, and cornering to refine policy learning. Experiments show that ego vehicles adjust their behavior, such as decelerating or yielding, to avoid collisions. Ablation studies highlight the importance of specific reward terms and state components. Animation results show that ego vehicles safely interact with pedestrians and animals that exhibit sudden acceleration, mid-crossing directional changes, and abrupt stops.

License: Attribution 4.0 International License
CCS Concepts: Computing methodologies → Physical simulation; Collision detection; Reinforcement learning
Pages: 12
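The abstract mentions reward terms for speed, time, time-to-collision, and cornering. A minimal sketch of how such terms could be combined into a per-step scalar reward is shown below; the term shapes, thresholds, and weights (`w_speed`, `w_ttc`, etc.) are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of a composite reward for ego-vehicle policy learning,
# combining speed, time, time-to-collision (TTC), and cornering terms as a
# weighted sum. All weights and thresholds are illustrative assumptions.

def reward(speed, target_speed, ttc, lateral_accel, dt=0.1,
           w_speed=1.0, w_time=0.1, w_ttc=2.0, w_corner=0.5):
    """Return a scalar reward for one simulation step."""
    # Speed term: penalize deviation from a context-dependent target speed.
    r_speed = -abs(speed - target_speed) / max(target_speed, 1e-6)
    # Time term: small constant penalty per step to encourage progress.
    r_time = -dt
    # TTC term: penalize dangerously small time-to-collision values only.
    r_ttc = -1.0 / max(ttc, 0.1) if ttc < 3.0 else 0.0
    # Cornering term: penalize harsh lateral acceleration while turning.
    r_corner = -max(lateral_accel - 2.0, 0.0)
    return (w_speed * r_speed + w_time * r_time
            + w_ttc * r_ttc + w_corner * r_corner)
```

In this shape, a vehicle cruising at the target speed with no nearby hazard incurs only the small per-step time penalty, while a shrinking TTC or harsh cornering quickly dominates the sum, which is what pushes a learned policy toward decelerating or yielding.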