Performance of deep-learning models incorporating knee alignment information for predicting ground reaction force during walking

Abstract

BACKGROUND: Wearable sensors combined with deep-learning models are increasingly being used to predict biomechanical variables. Researchers have focused on either simple neural networks or complex pretrained models with multiple layers. In addition, studies have rarely integrated knee alignment information or the side affected by injury as features to improve model predictions. In this study, we compared the performance of selected model architectures, including complex pretrained models, in predicting three-dimensional (3D) ground reaction force (GRF) data during level walking, using data obtained from motion capture systems and wearable accelerometers.

RESULTS: Ten deep-learning models for predicting the 3D GRF were developed using motion capture and accelerometer data with or without subject-specific features. Incorporating subject-specific features improved prediction accuracy for all models except the long short-term memory (LSTM) model. A two-dimensional (2D)-CNN-LSTM hybrid model achieved the best results. Established models, such as ResNet50 and Inception, performed better when trained with pretrained ImageNet weights and subject-specific features, underscoring the value of pretrained knowledge and subject-specific information for improving accuracy. However, these models did not outperform the custom hybrid models in predicting time-series 3D GRF data, indicating that larger models do not necessarily perform better for time-series applications, yet they invariably impose greater computational demands.

CONCLUSION: Incorporating subject-specific features, such as alignment information, enhanced the accuracy of GRF predictions during walking. Complex pretrained models were outperformed by custom hybrid models for time-series 3D GRF prediction during walking. Custom models that use alignment features and have lower computational demands are a more efficient and effective choice for applications requiring accurate and resource-efficient predictions.
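To make the hybrid architecture concrete, the sketch below shows, in plain NumPy, the forward pass of a minimal 2D-CNN-LSTM of the kind the abstract describes: a 2D convolution over an accelerometer window (time x channels), an LSTM stepping over the resulting feature rows, and a linear head emitting one 3D GRF sample per window. All dimensions, parameter names, and the two subject-specific features (an alignment angle and an affected-side flag) are illustrative assumptions, not the authors' actual model; in practice such a network would be built and trained in a deep-learning framework.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv2d_valid(x, kernel):
    """Naive 'valid' cross-correlation (what DL libraries call 2D convolution)."""
    H, W = x.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell update; gates stacked in the weights as [i, f, g, o]."""
    n = h.size
    z = W @ x + U @ h + b
    i = sigmoid(z[:n])          # input gate
    f = sigmoid(z[n:2 * n])     # forget gate
    g = np.tanh(z[2 * n:3 * n]) # candidate cell state
    o = sigmoid(z[3 * n:])      # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def predict_grf(window, subject_feats, params):
    """2D conv over (time x channel) window -> LSTM over time -> 3D GRF."""
    feat = np.maximum(conv2d_valid(window, params["kernel"]), 0.0)  # ReLU
    n_hidden = params["U"].shape[1]
    h = np.zeros(n_hidden)
    c = np.zeros(n_hidden)
    for row in feat:  # step through the convolved time axis
        # inject subject-specific features (e.g. alignment) at every step
        x = np.concatenate([row, subject_feats])
        h, c = lstm_step(x, h, c, params["W"], params["U"], params["b"])
    return params["W_out"] @ h + params["b_out"]  # (Fx, Fy, Fz)

# Hypothetical sizes: 100-sample window, 6 accelerometer channels,
# 8 hidden units, 5x3 kernel, 2 subject-specific features.
rng = np.random.default_rng(0)
T, C, H, KH, KW, S = 100, 6, 8, 5, 3, 2
x_in = (C - KW + 1) + S  # conv feature width + subject features
params = {
    "kernel": rng.normal(size=(KH, KW)) * 0.1,
    "W": rng.normal(size=(4 * H, x_in)) * 0.1,
    "U": rng.normal(size=(4 * H, H)) * 0.1,
    "b": np.zeros(4 * H),
    "W_out": rng.normal(size=(3, H)) * 0.1,
    "b_out": np.zeros(3),
}
window = rng.normal(size=(T, C))
subject = np.array([2.5, 1.0])  # hypothetical alignment angle and side flag
grf = predict_grf(window, subject, params)
print(grf.shape)  # (3,) — one 3D GRF prediction per input window
```

The per-step concatenation is one simple way to condition the sequence model on static subject features; alternatives such as feeding them only to the output head would serve the same purpose.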
