DeFT-Net: Dual-Window Extended Frequency Transformer for Rhythmic Motion Prediction

Date
2024
Publisher
The Eurographics Association
Abstract
Enabling online virtual reality (VR) users to dance and move in a way that mirrors the real world requires accurate prediction of human motion sequences, paving the way for an immersive and connected experience. However, latency in networked motion tracking undermines the sense of complete engagement, making prediction necessary for online synchronization of remote motions. To address this challenge, we propose a novel approach that leverages a synthetically generated dataset based on supervised foot-anchor placement timings of rhythmic motions to ensure periodicity, reducing prediction error. Specifically, our model comprises a discrete cosine transform (DCT) to encode motion, refine high frequencies, and smooth motion sequences, preventing jittery motions. We introduce a feed-forward attention mechanism that learns from dual-window pairs of 3D key-point pose histories to predict future motions. Quantitative and qualitative experiments on the Human3.6M dataset show improvements under the MPJPE evaluation protocol compared with prior state-of-the-art methods.
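The abstract's use of a DCT to encode motion and suppress high frequencies can be illustrated with a minimal sketch. This is not the authors' pipeline; the helper name, the `keep` parameter, and the per-joint temporal transform are assumptions for illustration only.

```python
import numpy as np
from scipy.fft import dct, idct

def smooth_motion_dct(poses, keep=10):
    """Encode a pose sequence with a temporal DCT and zero out
    high-frequency coefficients to suppress jitter.
    (Hypothetical helper, not the paper's exact method.)

    poses: (T, D) array of T frames of flattened 3D key points.
    keep:  number of low-frequency DCT coefficients to retain.
    """
    coeffs = dct(poses, axis=0, norm="ortho")  # per-dimension temporal DCT
    coeffs[keep:] = 0.0                        # drop high frequencies
    return idct(coeffs, axis=0, norm="ortho")  # back to the time domain

# Usage: denoise a sine-like trajectory for a single coordinate.
t = np.linspace(0, 2 * np.pi, 50)
noisy = (np.sin(t) + 0.05 * np.random.randn(50)).reshape(-1, 1)
smooth = smooth_motion_dct(noisy, keep=8)
```

Keeping only the low-frequency coefficients acts as a smoothing prior on the motion, which is consistent with the abstract's stated goal of preventing jittery predicted sequences.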
CCS Concepts: Computing methodologies → Machine Learning; Motion Processing; Virtual Reality

        
@inproceedings{10.2312:cgvc.20241220,
  booktitle = {Computer Graphics and Visual Computing (CGVC)},
  editor    = {Hunter, David and Slingsby, Aidan},
  title     = {{DeFT-Net: Dual-Window Extended Frequency Transformer for Rhythmic Motion Prediction}},
  author    = {Ademola, Adeyemi and Sinclair, David and Koniaris, Babis and Hannah, Samantha and Mitchell, Kenny},
  year      = {2024},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-249-3},
  DOI       = {10.2312/cgvc.20241220}
}