TCN-LSTM Fusion for Lower Limb Joint Angle Prediction Under Multimodal Signals
DOI:
https://doi.org/10.3991/ijoe.v21i13.56587

Keywords:
TCN, LSTM, Multimodal, Joint angle prediction, Lower limb rehabilitation robot

Abstract
This study addresses the limitations of single-modal input and insufficient temporal feature extraction in traditional deep learning models by proposing a multimodal framework for lower-limb joint-angle prediction. Using Inertial Measurement Unit (IMU), surface electromyography (sEMG), and goniometer data as inputs, the model cascades a temporal convolutional network (TCN) with a long short-term memory (LSTM) network: the TCN first extracts complex spatial features from the multimodal signals, after which the LSTM captures their temporal dependencies to map input sequences to multiple joint angles. Experimental results show that, compared with TCN, LSTM, Bi-LSTM, and GRU benchmarks, the proposed TCN-LSTM model reduces average RMSE by 61.54%, 21.13%, 21.45%, and 16.37%, respectively; reduces average MAE by 13.48%, 7.25%, 2.97%, and 6.15%, respectively; and improves R² by 3.73%, 1.08%, 1.00%, and 0.84%, respectively. Overall, the TCN-LSTM model delivers superior prediction accuracy, demonstrating significant practical value for the control of lower-limb rehabilitation robots.
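The abstract evaluates models with RMSE, MAE, and R². As a minimal sketch of how these three metrics are conventionally computed (function names and the toy angle values below are illustrative assumptions, not taken from the paper):

```python
import math

def rmse(y_true, y_pred):
    """Root-mean-square error between measured and predicted joint angles."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical knee-angle sequence in degrees (not data from the paper)
y_true = [10.0, 20.0, 30.0, 40.0]
y_pred = [11.0, 19.0, 31.0, 39.0]
print(rmse(y_true, y_pred))  # 1.0
print(mae(y_true, y_pred))   # 1.0
print(r2(y_true, y_pred))    # 0.992
```

Lower RMSE/MAE and higher R² indicate closer agreement between predicted and goniometer-measured joint angles, which is how the reported percentage improvements should be read.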
License
Copyright (c) 2025 Quan Chen, Yongxian Song, Qi Zhang, Yan Yan, Yuanlin Fang, Xuenian Zheng

This work is licensed under a Creative Commons Attribution 4.0 International License.

