Smartphone-Based Interpretable Machine Learning for Classifying Single-Leg Squat Performance Using Trunk, Pelvic, and Knee Kinematics: Cross-Sectional Study

Abstract

BACKGROUND: Single-leg squat (SLS) performance is widely used to screen functional movement quality, but practical assessment often relies on expert visual grading or laboratory-based motion capture. In addition, conventional SLS criteria mainly focus on isolated joint deviations and may overlook the coordination-related, multisegment movement patterns that characterize impaired performance.

OBJECTIVE: This study aimed to examine the feasibility of an interpretable machine learning framework for classifying SLS performance into 3 levels (good, moderate, and poor) from single-smartphone, frontal-view videos based on trunk, pelvic, and knee kinematics, and to evaluate coordination-informed features and model explainability using Shapley additive explanations (SHAP) and local interpretable model-agnostic explanations (LIME).

METHODS: A dataset of frontal-view SLS videos was labeled by physiotherapists into 3 functional categories (good, moderate, and poor). Videos were processed using 2D pose estimation, and models were trained on 17 engineered kinematic features derived from trunk, pelvic, and knee angles. Following feature selection, 7 classifiers were trained and evaluated on the 8 selected features with stratified 5-fold cross-validation and a held-out test set. SHAP and LIME were applied for global and local interpretability, respectively.

RESULTS: On the held-out test set, adaptive boosting (AdaBoost) classified SLS performance with an accuracy of 0.84, an F1-score of 0.85, and an area under the curve of 0.92. SHAP indicated that the summated angle (trunk + pelvis + knee), coordination-related features (the knee × trunk interaction and the knee-to-trunk ratio), and the knee angle were the key contributors to model predictions. LIME provided instance-level explanations that helped interpret individual classifications and decision boundaries.

CONCLUSIONS: This study presents an interpretable machine learning framework for classifying SLS performance into 3 levels using frontal-view videos acquired with a single smartphone. By leveraging coordination-informed engineered features and explainable artificial intelligence, the framework enables transparent interpretation of movement performance beyond isolated joint deviations. The proposed workflow uses a smartphone for standardized video acquisition, while performance screening is achieved through machine learning. Given its lightweight feature design, the framework has potential for future on-device implementation on modern smartphones and may support rehabilitation planning and injury-prevention strategies in sports and clinical settings.
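To make the feature engineering step concrete, the sketch below shows how the coordination-informed quantities named in the abstract (the summated angle, the knee × trunk interaction, and the knee-to-trunk ratio) could be computed from per-frame angle series. This is a minimal illustration under stated assumptions: the function name, the aggregation statistics, and the epsilon guard are ours, and the abstract does not enumerate the full set of 17 engineered features.

```python
"""Illustrative coordination-informed feature engineering for one SLS trial.

Assumes per-frame frontal-plane angles (degrees) for trunk, pelvis, and knee
have already been extracted from 2D pose-estimation keypoints. Feature names
and aggregation choices are hypothetical, not the paper's exact definitions.
"""
import numpy as np


def engineer_features(trunk: np.ndarray, pelvis: np.ndarray, knee: np.ndarray) -> dict:
    """Aggregate per-frame angle series into trial-level features."""
    summated = trunk + pelvis + knee       # summated angle (trunk + pelvis + knee)
    interaction = knee * trunk             # knee x trunk interaction term
    eps = 1e-6                             # guard against division by zero
    ratio = knee / (np.abs(trunk) + eps)   # knee-to-trunk ratio

    return {
        "knee_peak": float(np.max(knee)),
        "trunk_peak": float(np.max(trunk)),
        "pelvis_peak": float(np.max(pelvis)),
        "summated_peak": float(np.max(summated)),
        "interaction_mean": float(np.mean(interaction)),
        "knee_trunk_ratio_mean": float(np.mean(ratio)),
    }


# Example: 100 frames of synthetic angle data standing in for a real trial
rng = np.random.default_rng(0)
features = engineer_features(
    trunk=rng.normal(10, 3, 100),
    pelvis=rng.normal(5, 2, 100),
    knee=rng.normal(15, 4, 100),
)
print(features)
```

In practice, the angle series would be derived from 2D pose-estimation keypoints in each video frame; the abstract does not specify which pose-estimation model was used.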
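The evaluation and explainability workflow described in the abstract (stratified 5-fold cross-validation, AdaBoost as the best-performing classifier, SHAP for global and LIME for local interpretability) could be approximated as below. The synthetic placeholder data, all hyperparameters, and the choice of a model-agnostic SHAP explainer are assumptions, not the paper's reported configuration.

```python
"""Sketch of the classification and explainability pipeline, assuming
scikit-learn, shap, and lime. Synthetic data stands in for the 8 selected
kinematic features and the 3-level labels (good/moderate/poor)."""
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score, train_test_split

CLASS_NAMES = ["good", "moderate", "poor"]

# Synthetic stand-in for the 8-feature matrix and 3-class labels
X, y = make_classification(n_samples=300, n_features=8, n_informative=6,
                           n_classes=3, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

clf = AdaBoostClassifier(n_estimators=100, random_state=42)  # assumed settings

# Stratified 5-fold cross-validation on the training split
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(clf, X_train, y_train, cv=cv, scoring="f1_macro")
print(f"CV macro-F1: {scores.mean():.3f} +/- {scores.std():.3f}")

clf.fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.3f}")

# Global interpretability: a model-agnostic SHAP explainer (SHAP's tree
# explainer does not cover AdaBoost), run on small background/test subsets
explainer = shap.Explainer(clf.predict_proba, X_train[:50])
shap_values = explainer(X_test[:10])
print(shap_values.values.shape)  # (samples, features, classes)

# Local interpretability: a LIME explanation for a single test instance
lime_explainer = LimeTabularExplainer(X_train, mode="classification",
                                      class_names=CLASS_NAMES)
lime_exp = lime_explainer.explain_instance(X_test[0], clf.predict_proba)
print(lime_exp.as_list())
```

The same pattern extends to comparing all 7 classifiers: loop over a list of fitted estimators, score each with the same stratified folds, and reserve the held-out split for the final report.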
