Gait recognition using spatio-temporal representation fusion learning network with IMU-based skeleton graph and body partition strategy


Abstract

Precise recognition of human lower limb movements based on wearable sensors is essential for human-computer interaction. However, existing methods tend to ignore the dynamic spatial information generated during the execution of lower limb movements, which reduces decoding accuracy and limits robustness. In this paper, we construct skeleton graph data from inertial measurement unit (IMU) sensors and propose a two-branch deep learning model, termed TCNN-MGCHN, to mine meaningful spatial and temporal feature representations from the IMU-based skeleton graphs. First, a temporal convolutional module (consisting of a multi-scale convolutional sub-module and an attention sub-module) is developed to extract highly discriminative temporal features. Second, a multi-scale graph convolutional module and an edge-importance weighting method based on a body-partition strategy are proposed to capture the intrinsic spatial relationships between skeleton nodes. Finally, the fused spatio-temporal features are passed to a classification module to predict gait movements and their sub-phases. Extensive comparison and ablation studies are conducted on our self-constructed human lower limb movement dataset. The results demonstrate that TCNN-MGCHN delivers superior classification performance compared with mainstream methods. This study can serve as a benchmark for IMU-based human lower limb movement recognition and related deep-learning modeling work.
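To make the spatial branch concrete, the sketch below applies a single graph-convolution step over a small IMU skeleton graph, with a per-edge importance mask standing in for the paper's body-partition weighting. The node layout, partition weights, and feature sizes are hypothetical illustrations, not the actual TCNN-MGCHN configuration, which the abstract does not specify.

```python
# Hypothetical sketch: one spatial graph-convolution step with per-edge
# importance weights (a stand-in for the body-partition strategy).
# Node indices, partition weights, and feature sizes are illustrative only.

# 5-node lower-limb skeleton: 0=pelvis, 1=L thigh, 2=L shank, 3=R thigh, 4=R shank
edges = [(0, 1), (1, 2), (0, 3), (3, 4)]
n = 5

# Adjacency matrix with self-loops
A = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
for i, j in edges:
    A[i][j] = A[j][i] = 1.0

# Edge-importance mask M (learned in the real model; fixed here):
# edges inside the same body partition (thigh-shank) weighted higher.
M = [[1.0] * n for _ in range(n)]
for i, j in [(1, 2), (3, 4)]:          # intra-partition edges
    M[i][j] = M[j][i] = 1.5

# Node features X (n x d), e.g. per-node IMU channel summaries
d = 3
X = [[0.1 * (i + 1) * (k + 1) for k in range(d)] for i in range(n)]

def graph_conv(A, M, X):
    """Y = row_normalize(A * M) @ X  (trainable weight matrix omitted)."""
    n = len(A)
    AM = [[A[i][j] * M[i][j] for j in range(n)] for i in range(n)]
    Y = []
    for i in range(n):
        deg = sum(AM[i]) or 1.0        # weighted degree for normalization
        Y.append([sum(AM[i][j] * X[j][k] for j in range(n)) / deg
                  for k in range(len(X[0]))])
    return Y

Y = graph_conv(A, M, X)  # each node now aggregates weighted neighbor features
```

In the full model, the mask `M` would be element-wise multiplied with the adjacency matrix at every graph-convolution layer and learned end-to-end, letting the network emphasize edges within informative body partitions.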
