Real-Time Learning of Gesture Transitions for Enhanced Biosignal-Based Control

[Figure: animation of a robotic arm controlled in real time through muscle activity; movements of a person's arm are monitored and translated into corresponding, synchronized movements of the robotic arm.]

Background
In human-robot interaction, biosignals such as muscle activity and actigraphy offer an intuitive medium for humans to interface with robotic systems. For real-time control, keeping latency within roughly 150 to 250 ms is pivotal to preserving the utility of the interface. While state-of-the-art algorithms achieve excellent performance in offline gesture classification, the non-stationary nature of biosignals makes that performance hard to sustain over time, necessitating frequent, time-intensive recalibrations.
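
To make that latency budget concrete, the arithmetic below sketches how the analysis window length, the window increment, and processing time combine into the user-perceived delay. All numbers are illustrative assumptions, and the half-window term is a common rule of thumb rather than a measured quantity.

```python
# Illustrative latency budget for a sliding-window biosignal classifier.
# All numbers are assumptions chosen for illustration, not measurements.

WINDOW_MS = 200      # length of the analysis window (ms)
INCREMENT_MS = 50    # step between consecutive windows (ms)
PROCESSING_MS = 20   # feature extraction + inference time (ms)

# Rule-of-thumb approximation: a new gesture must fill roughly half the
# window before it dominates the decision, so perceived latency is about
# half the window plus one increment plus the processing time.
perceived_latency_ms = WINDOW_MS / 2 + INCREMENT_MS + PROCESSING_MS
print(f"Estimated perceived latency: {perceived_latency_ms:.0f} ms")  # 170 ms

assert 150 <= perceived_latency_ms <= 250, "outside the stated latency budget"
```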

Problem Statement
Traditional classification models center on identifying static user gestures (e.g., a closed fist or an open hand) to control robotic mechanisms (e.g., closing or opening a robotic arm's end-effector). This approach neglects the dynamics of gesture transitions and inherently increases system latency, since the user must reach a stable gesture state before a new command can be issued.
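
As an illustration of where that latency comes from, here is a minimal sketch of a conventional static-gesture pipeline, assuming per-window predictions are stabilized with a majority vote; the vote is a common post-processing step rather than the method of any particular system, and the gesture names and vote length are placeholders.

```python
from collections import Counter, deque

def majority_vote_stream(window_predictions, vote_len=5):
    """Conventional post-processing for static-gesture classifiers: a new
    command is issued only once one label wins a majority of the last
    `vote_len` per-window predictions, so every transition costs up to
    several window increments of extra delay."""
    history = deque(maxlen=vote_len)
    current = None
    for t, label in enumerate(window_predictions):
        history.append(label)
        winner, count = Counter(history).most_common(1)[0]
        if count > len(history) // 2 and winner != current:
            current = winner
            yield t, current  # (window index, newly issued command)

# Toy per-window labels around a transition from 'open' to 'fist':
# the transition starts at window 4, but the command fires only at window 7.
preds = ["open"] * 4 + ["fist", "open", "fist", "fist", "fist"]
for t, cmd in majority_vote_stream(preds):
    print(t, cmd)  # prints "0 open", then "7 fist"
```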

Proposed Approach
This project proposes to create a model that learns and predicts gesture transitions in real time, shifting the focus from merely recognizing stable gestures to interpreting and anticipating the dynamic transitions between them (e.g., moving from gesture A to gesture B). Manually labeling transitions between every possible gesture pair is impractical, and the need for recalibration persists; instead, introducing pseudo-labeled transition examples into the target dataset aims to help the network detect transitions sooner, mitigating system latency. This latency reduction could in turn permit larger data window sizes, potentially improving overall system performance while keeping latency low.
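
Below is a minimal sketch of one way such pseudo-labeled transition examples could be synthesized, assuming a simple cross-fade between recordings of two stable gestures; make_pseudo_transition and the blending heuristic are illustrative placeholders, not the project's actual method.

```python
import numpy as np

def make_pseudo_transition(seg_a, seg_b, n_transition=64):
    """Synthesize a pseudo-labeled transition between two stable-gesture
    recordings (arrays of shape [time, channels]) by cross-fading the tail
    of `seg_a` into the head of `seg_b`.  The blended region can be
    pseudo-labeled with the target gesture so the network learns to commit
    to B while the signal still partly resembles A."""
    tail = seg_a[-n_transition:]
    head = seg_b[:n_transition]
    alpha = np.linspace(0.0, 1.0, n_transition)[:, None]  # fade-in weights
    blend = (1.0 - alpha) * tail + alpha * head
    return np.concatenate([seg_a[:-n_transition], blend, seg_b[n_transition:]])

# Toy example: two fake 8-channel recordings standing in for gestures A and B.
rng = np.random.default_rng(0)
seg_a = rng.normal(0.0, 1.0, size=(256, 8))  # stable gesture A
seg_b = rng.normal(2.0, 1.0, size=(256, 8))  # stable gesture B
transition = make_pseudo_transition(seg_a, seg_b)
print(transition.shape)  # (448, 8): A, cross-fade, then B
```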


Objectives

  • Dynamic Gesture Learning: Prioritize deciphering biosignal patterns during gesture transitions so the model can predict the upcoming gesture in its initial phase rather than after stabilization (see the sketch after this list).
  • Latency Reduction: By predicting gesture transitions, the model aims to enable more immediate control responses, potentially allowing larger window sizes, and with them better overall performance, without compromising latency.
  • Reinforcement Learning: Employ reinforcement learning to iteratively improve the model's prediction of gesture transitions as it gathers more interaction data, ensuring continual adaptation to the user's gesture dynamics.
  • User and System Co-Adaptation: Engage the user and system in a symbiotic learning loop, where the system adapts to the user's specific gesture transitions and the user incrementally adjusts their gestures in response to system feedback, fostering more intuitive control.
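
To make the early-commitment idea in the first bullet concrete, the sketch below shows a streaming decision rule that issues the next command as soon as a transition-aware classifier's predicted probability for a new gesture crosses a confidence threshold, instead of waiting for the gesture to stabilize. The threshold value and the toy probability trace are assumptions for illustration.

```python
def early_commit(prob_stream, threshold=0.8):
    """Issue a command as soon as any non-current gesture's predicted
    probability exceeds `threshold`, rather than waiting for the signal
    to stabilize.  `prob_stream` yields dicts mapping gesture name ->
    probability (e.g., from a transition-aware classifier)."""
    current = None
    for t, probs in enumerate(prob_stream):
        gesture, p = max(probs.items(), key=lambda kv: kv[1])
        if p >= threshold and gesture != current:
            current = gesture
            yield t, gesture, p

# Toy probability trace: a transition from 'open' to 'fist' that the
# model begins to flag at t=2, well before 'fist' fully dominates.
trace = [
    {"open": 0.95, "fist": 0.05},
    {"open": 0.90, "fist": 0.10},
    {"open": 0.15, "fist": 0.85},  # early evidence of the transition
    {"open": 0.05, "fist": 0.95},
]
for t, g, p in early_commit(trace):
    print(t, g, round(p, 2))  # prints "0 open 0.95", then "2 fist 0.85"
```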

Expected Outcomes

  • Enhanced Control: Give users more fluid and responsive control, particularly during dynamic interactions, by reducing latency through predictive transition modeling.
  • Reduced Recalibration: Minimize the need for frequent recalibration by ensuring the model continually learns and adapts to the user's evolving gesture dynamics and transitions.
  • Improved Performance: Improve model performance by permitting larger data window sizes, enabled by the focus on transitions, which provide a richer data context for gesture prediction while keeping latency low.



Scope (credits): 60