A Gaze Prediction Model for Task-Oriented Virtual Reality

Mammou, Konstantina; Mania, Katerina; Günther, Tobias; Montazeri, Zahra

Date: 2025-05-09
Year: 2025
ISBN: 978-3-03868-269-1
ISSN: 1017-4656
DOI: https://doi.org/10.2312/egp.20251020
Handle: https://diglib.eg.org/handle/10.2312/egp20251020
Pages: 2

Abstract: In this work, we present a gaze prediction model for task-oriented Virtual Reality environments. Unlike past work, which focuses on gaze prediction for specific tasks, we investigate the role and potential of temporal continuity in enabling accurate predictions across diverse task categories. The model reduces input complexity while maintaining high prediction accuracy. Evaluated on the OpenNEEDS dataset, it significantly outperforms baseline methods. The model demonstrates strong potential for integration into gaze-based VR interactions and foveated rendering pipelines. Future work will focus on runtime optimization and expanding evaluation across diverse VR scenarios.

License: Attribution 4.0 International License (CC BY 4.0)

CCS Concepts: Human-centered computing → Virtual reality; Computing methodologies → Neural networks; Rendering