Title: Towards Improving Educational Virtual Reality by Classifying Distraction using Deep Learning
Authors: Khokhar, Adil; Borst, Christoph W.
Editors: Hideaki Uchiyama; Jean-Marie Normand
Date issued: 2022-11-29
Year: 2022
ISBN: 978-3-03868-179-3
ISSN: 1727-530X
DOI: 10.2312/egve.20221279 (https://doi.org/10.2312/egve.20221279)
Handle: https://diglib.eg.org:443/handle/10.2312/egve20221279
Pages: 85-90 (6 pages)
License: Attribution 4.0 International License
CCS Concepts: Computing methodologies -> Machine learning; Human-centered computing -> Virtual reality
Keywords: Computing methodologies; Machine learning; Human-centered computing; Virtual reality

Abstract: Distractions can cause students to miss out on critical information in educational Virtual Reality (VR) environments. Our work uses generalized features (angular velocities, positional velocities, pupil diameter, and eye openness) extracted from VR headset sensor data (head-tracking, hand-tracking, and eye-tracking) to train a deep CNN-LSTM classifier to detect distractors in our educational VR environment. We present preliminary results demonstrating a 94.93% accuracy for our classifier, an improvement in both the accuracy and generality of features used over two recent approaches. We believe that our work can be used to improve educational VR by providing a more accurate and generalizable approach for distractor detection.
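The abstract mentions extracting positional-velocity features from head-tracking data. A minimal sketch of what such a feature-extraction step could look like is shown below; this is an illustration only, assuming uniformly sampled 3D head positions, and the function name `positional_velocities` is a hypothetical choice, not the paper's API:

```python
# Illustrative sketch: finite-difference positional velocities from
# head-tracking samples. Names and sampling assumptions are hypothetical,
# not taken from the paper.

def positional_velocities(positions, dt):
    """Speed (distance / dt) between consecutive 3D position samples."""
    velocities = []
    for (x0, y0, z0), (x1, y1, z1) in zip(positions, positions[1:]):
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5
        velocities.append(dist / dt)
    return velocities

# Example: the head moves 0.1 m along x per 0.1 s frame,
# so each frame-to-frame speed is about 1.0 m/s.
samples = [(0.0, 1.6, 0.0), (0.1, 1.6, 0.0), (0.2, 1.6, 0.0)]
print(positional_velocities(samples, dt=0.1))
```

Analogous per-frame series (angular velocities from head rotation, pupil diameter, eye openness) would then be windowed and fed to the CNN-LSTM classifier described in the abstract.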