Title: Avatar Emotion Recognition using Non-verbal Communication
Authors: Bazargani, Jalal Safari; Sadeghi-Niaraki, Abolghasem; Choi, Soo-Mi
Editors: Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
Date: 2023-10-09 (published 2023)
ISBN: 978-3-03868-234-9
DOI: https://doi.org/10.2312/pg.20231277
URI: https://diglib.eg.org:443/handle/10.2312/pg20231277
Pages: 103-104 (2 pages)
License: Attribution 4.0 International License (CC BY 4.0)
CCS Concepts: Human-centered computing -> Human computer interaction (HCI)

Abstract: Among the sources of information about emotions, body movements, known as "kinesics" in non-verbal communication, have received limited attention. This research gap suggests the need to investigate suitable body-movement-based approaches for making communication in virtual environments more realistic. This study therefore proposes an automated emotion recognition approach suitable for use in virtual environments, consisting of two pipelines. In the first pipeline, upper-body keypoint-based recognition, the HEROES video dataset was used to train a bidirectional long short-term memory (BiLSTM) model on upper-body keypoints; the model predicts four discrete emotions (boredom, disgust, happiness, and interest) with an accuracy of 84%. In the second pipeline, wrist-movement-based recognition, a random forest model was trained on 17 features computed from wrist acceleration data along each axis; it achieved an accuracy of 63% in distinguishing three discrete emotions (sadness, neutrality, and happiness). The findings suggest that the proposed approach is a noticeable step toward automated emotion recognition without any sensors beyond the head-mounted display (HMD).
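
As a rough illustration of the first pipeline, the sketch below shows how a BiLSTM classifier over per-frame upper-body keypoint sequences might be set up in PyTorch. The keypoint count, sequence length, and layer sizes are assumptions for illustration only, not the configuration reported in the paper.

```python
# Hypothetical sketch of the upper-body keypoint pipeline. Shapes, layer
# sizes, and the keypoint count are assumptions, not the authors' setup.
import torch
import torch.nn as nn

NUM_KEYPOINTS = 12              # assumed number of upper-body keypoints
FEATURES = NUM_KEYPOINTS * 2    # (x, y) coordinates per keypoint per frame
NUM_EMOTIONS = 4                # boredom, disgust, happiness, interest

class BiLSTMEmotion(nn.Module):
    def __init__(self, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(FEATURES, hidden_size,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_size, NUM_EMOTIONS)

    def forward(self, x):              # x: (batch, frames, FEATURES)
        out, _ = self.lstm(x)          # (batch, frames, 2 * hidden_size)
        return self.head(out[:, -1])   # classify from the final time step

model = BiLSTMEmotion()
clips = torch.randn(8, 30, FEATURES)   # 8 clips of 30 frames each
logits = model(clips)                   # (8, NUM_EMOTIONS)
```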
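
The second pipeline can similarly be pictured as a random forest over per-axis summary statistics of wrist acceleration. The abstract does not enumerate the 17 features, so the statistics below (mean, standard deviation, RMS, and so on) are placeholder assumptions, as is the window length.

```python
# Hypothetical sketch of the wrist-movement pipeline. The paper reports 17
# features per axis from wrist acceleration; the exact features are not
# given in the abstract, so the statistics below are illustrative stand-ins.
import numpy as np
from scipy import stats
from sklearn.ensemble import RandomForestClassifier

def axis_features(a):
    """Summary statistics for one acceleration axis (1-D array)."""
    return [a.mean(), a.std(), a.min(), a.max(), np.ptp(a),
            np.median(a), stats.skew(a), stats.kurtosis(a),
            np.sqrt(np.mean(a ** 2))]  # root mean square

def window_features(window):
    """window: (samples, 3) acceleration along the x, y, z axes."""
    return np.concatenate([axis_features(window[:, i]) for i in range(3)])

# Toy data: 100 windows of 128 samples each;
# labels 0 = sadness, 1 = neutrality, 2 = happiness.
rng = np.random.default_rng(0)
X = np.stack([window_features(rng.normal(size=(128, 3))) for _ in range(100)])
y = rng.integers(0, 3, size=100)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:5]))
```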