Title: Efficient acoustic perception for virtual AI agents
Authors: Chemistruck, Mike; Allen, Andrew; Snyder, John; Raghuvanshi, Nikunj; Narain, Rahul; Neff, Michael; Zordan, Victor
Year: 2021
Record date: 2022-02-07
ISSN: 2577-6193
DOI: https://doi.org/10.1145/3480139
URI: https://diglib.eg.org:443/handle/10.1145/3480139

Abstract: We model acoustic perception in AI agents efficiently within complex scenes containing many sound events. The key idea is to employ perceptual parameters that capture how each sound event propagates through the scene to the agent's location. This naturally conforms virtual perception to human perception. We propose a simplified auditory masking model that limits localization capability in the presence of distracting sounds. We show that anisotropic reflections, as well as the initial sound, serve as useful localization cues. Our system is simple, fast, and modular, and obtains natural results in our tests, letting agents navigate through passageways and portals by sound alone, and anticipate or track occluded but audible targets. Source code is provided.

Keywords: acoustics; perception; masking; localization; sound propagation; virtual agents; game AI; NPC AI
CCS concepts: Computing methodologies → Physical simulation; Virtual reality; Applied computing → Sound and music computing
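The abstract describes the auditory masking model only at a high level. As a rough illustration of the general idea (a threshold-style masking test, not the authors' actual formulation; the function names, the 3 dB margin, and the use of plain decibel levels are all assumptions for this sketch), a target sound might be treated as localizable only when its received level exceeds the combined level of competing sounds by some margin:

```python
import math

def db_to_power(db):
    """Convert a decibel level to linear power."""
    return 10.0 ** (db / 10.0)

def is_localizable(target_db, distractor_dbs, margin_db=3.0):
    """Hypothetical masking test: the target counts as localizable only
    if its level exceeds the summed distractor level by margin_db.
    All parameter names and the default margin are illustrative."""
    masker_power = sum(db_to_power(d) for d in distractor_dbs)
    if masker_power > 0:
        masker_db = 10.0 * math.log10(masker_power)
    else:
        masker_db = -math.inf  # no distractors: always localizable
    return target_db >= masker_db + margin_db
```

With no distractors the masker level is minus infinity, so any audible target passes; a loud nearby distractor raises the combined masker level and suppresses localization of quieter targets, which is the qualitative behavior the abstract attributes to its masking model.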