Authors & Affiliations
Marita Metzler, Christian Klaes
Abstract
Robotic-assisted rehabilitation therapies play a crucial role in improving motor recovery by providing precise, repeatable, and intensive therapeutic interventions that traditional methods often lack [1,2,3]. Nonetheless, their adaptability remains limited by the lack of comprehensive data on patients' biophysical signals and behavior [4,5,6]. Developing intelligent control mechanisms for rehabilitation devices is therefore an essential step toward enabling personalized and responsive training for patients.
This research project introduces the development of an innovative multimodal decoder for hybrid Brain-Computer Interfaces (BCI), utilizing Electroencephalography (EEG), Electromyography (EMG), 3D movement trajectories, and eye-tracking data. The decoder learns the complex interactions among these varied sensor modalities to improve the estimation of movement intention, the detection of movement onset, and the classification of movement tasks. The study focuses on three specific movement tasks: wrist rotation, reaching, and a simplified activity of daily living (drinking).
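The abstract does not describe the preprocessing pipeline. As a purely hypothetical illustration of how such heterogeneous recordings might be combined into labeled training examples for the three tasks, the sketch below (assuming example sampling rates, channel counts, and NumPy-based windowing, none of which are specified in the abstract) resamples each stream into overlapping windows and attaches a task label.

```python
# Hypothetical preprocessing sketch: align multimodal streams into labeled
# windows. Sampling rates, channel counts, and window length are assumptions.
import numpy as np

TASKS = {"wrist_rotation": 0, "reaching": 1, "drinking": 2}

def make_windows(stream, fs, win_s=2.0, step_s=0.5):
    """Slice a (time, channels) recording into overlapping windows."""
    win, step = int(win_s * fs), int(step_s * fs)
    return np.stack([stream[i:i + win]
                     for i in range(0, len(stream) - win + 1, step)])

# Example: one trial of the "drinking" task with streams at different rates.
eeg  = np.random.randn(5000, 32)   # 1000 Hz EEG, 32 channels (assumed)
emg  = np.random.randn(5000, 8)    # 1000 Hz EMG, 8 channels (assumed)
traj = np.random.randn(600, 3)     # 120 Hz 3D hand trajectory (assumed)
gaze = np.random.randn(600, 2)     # 120 Hz gaze coordinates (assumed)

sample = {
    "eeg":  make_windows(eeg, fs=1000),
    "emg":  make_windows(emg, fs=1000),
    "traj": make_windows(traj, fs=120),
    "gaze": make_windows(gaze, fs=120),
    "label": TASKS["drinking"],
}
print({k: v.shape for k, v in sample.items() if k != "label"})
```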
Utilizing advanced deep learning techniques, in particular attention-based models, the decoder is trained to capture both spatial and temporal relationships within the multimodal data. The study also examines the potential of this multimodal decoder for real-time control of rehabilitation devices, such as exoskeletons, offering adaptive control capabilities tailored to the patient's motor abilities.
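The abstract does not specify the network architecture. As a rough sketch of the kind of attention-based multimodal fusion described above (assuming PyTorch, hypothetical channel counts and layer sizes, and separate heads for onset detection and task classification), one possible minimal realization looks like this:

```python
# Illustrative sketch only: a minimal attention-based multimodal decoder.
# All layer sizes, channel counts, and window lengths are hypothetical;
# the actual architecture used in the study is not specified in the abstract.
import torch
import torch.nn as nn


class MultimodalAttentionDecoder(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_classes=3):
        super().__init__()
        # Per-modality encoders project raw windows into a shared embedding space.
        # Assumed channels: 32 EEG, 8 EMG, 3D trajectory (x, y, z), 2D gaze.
        self.encoders = nn.ModuleDict({
            "eeg": nn.Linear(32, d_model),
            "emg": nn.Linear(8, d_model),
            "traj": nn.Linear(3, d_model),
            "gaze": nn.Linear(2, d_model),
        })
        # Self-attention over the concatenated multimodal sequence lets the
        # model weight informative time steps and modalities jointly.
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.attention = nn.TransformerEncoder(layer, num_layers=2)
        # Two output heads: movement-onset detection and task classification.
        self.onset_head = nn.Linear(d_model, 1)
        self.task_head = nn.Linear(d_model, n_classes)

    def forward(self, eeg, emg, traj, gaze):
        # Each input: (batch, time, channels); embed and concatenate over time.
        tokens = torch.cat([
            self.encoders["eeg"](eeg),
            self.encoders["emg"](emg),
            self.encoders["traj"](traj),
            self.encoders["gaze"](gaze),
        ], dim=1)
        fused = self.attention(tokens).mean(dim=1)  # pooled multimodal context
        return self.onset_head(fused), self.task_head(fused)


# Example with random windows (200 samples of EEG/EMG, 60 of trajectory/gaze).
model = MultimodalAttentionDecoder()
onset, task = model(torch.randn(4, 200, 32), torch.randn(4, 200, 8),
                    torch.randn(4, 60, 3), torch.randn(4, 60, 2))
print(onset.shape, task.shape)  # torch.Size([4, 1]) torch.Size([4, 3])
```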
Our findings suggest that integrating EEG, EMG, 3D movement, and eye-tracking data can significantly improve the accuracy of movement intention detection and task estimation. This advancement holds promise for enhancing the effectiveness of rehabilitation devices, enabling more personalized and context-aware movement support for individuals with motor impairments.