Learning to Wait: Preventing Global Congestion from Local Observations in Real-Time Crowd Navigation

Authors: Ruprecht, Irena; Michelic, Florian; Preiner, Reinhold; Comino Trinidad, Marc
Editors: Mancinelli, Claudio; Maggioli, Filippo; Romanengo, Chiara; Cabiddu, Daniela; Giorgi, Daniela
Date available: 2025-11-21
Year: 2025
ISBN: 978-3-03868-296-7
ISSN: 2617-4855
DOI: https://doi.org/10.2312/stag.20251341
Handle: https://diglib.eg.org/handle/10.2312/stag20251341
Pages: 2
License: Attribution 4.0 International License (CC BY 4.0)
CCS Concepts: Computing methodologies → Real-time simulation; Computing methodologies → Multi-agent reinforcement learning

Abstract: We present a real-time crowd simulation approach based on reinforcement learning (RL), addressing congestion prevention in confined spaces. We learn a local navigation policy that uses compact, fast-to-compute per-agent observations of a small set of neighbors, including their desired directions. Alongside goal progress and inter-agent spacing, we reward agents for waiting when neighbors ahead pursue similar goals. This formulation fosters global self-organization from purely local interactions. Preliminary results show reduced congestion and consistent goal attainment for large crowds with hundreds of agents.
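The abstract describes the reward shaping only in prose. The following is a minimal, hypothetical sketch of how such a per-agent reward could combine goal progress, inter-agent spacing, and a waiting bonus when neighbors ahead pursue similar goals. All function names, weights, and thresholds (w_progress, min_dist, fov_cos, align_cos, ...) are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def agent_reward(pos, vel, goal,
                 neighbor_pos, neighbor_dirs,
                 prev_goal_dist,
                 w_progress=1.0, w_spacing=0.5, w_wait=0.3,
                 min_dist=0.5, fov_cos=0.5, align_cos=0.8):
    """Illustrative per-step reward for one agent (weights/thresholds hypothetical).

    pos, vel, goal  -- 2D position, velocity, and goal of this agent
    neighbor_pos    -- (k, 2) positions of the k observed neighbors
    neighbor_dirs   -- (k, 2) unit desired directions of those neighbors
    prev_goal_dist  -- distance to the goal at the previous step
    """
    goal_dist = np.linalg.norm(goal - pos)
    desired_dir = (goal - pos) / (goal_dist + 1e-8)

    # 1) Goal progress: reward reducing the distance to the goal.
    r_progress = w_progress * (prev_goal_dist - goal_dist)

    # 2) Inter-agent spacing: penalize neighbors closer than a comfort radius.
    offsets = neighbor_pos - pos                      # (k, 2)
    dists = np.linalg.norm(offsets, axis=1) + 1e-8    # (k,)
    r_spacing = -w_spacing * np.sum(np.maximum(0.0, min_dist - dists))

    # 3) Waiting: if a neighbor lies in the agent's forward cone and pursues a
    #    similar desired direction, reward standing still instead of pushing on.
    ahead = (offsets / dists[:, None]) @ desired_dir > fov_cos
    similar_goal = neighbor_dirs @ desired_dir > align_cos
    if np.any(ahead & similar_goal):
        speed = np.linalg.norm(vel)
        r_wait = w_wait * np.exp(-speed)              # largest when the agent waits
    else:
        r_wait = 0.0

    return r_progress + r_spacing + r_wait, goal_dist
```

In a training loop, the returned goal_dist would be fed back as prev_goal_dist on the next step; the per-agent observation would similarly be restricted to the k nearest neighbors and their desired directions, as described in the abstract.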