An Interactive Human-in-the-Loop Framework for Skeleton-Based Posture Recognition in Model Education

Authors: Shen Jing, Chen Ling, He Xiaotong, Zuo Chuanlin, Li Xiangjun, Dong Lin
This paper presents a human-in-the-loop interactive framework for skeleton-based posture recognition, designed to support model training and artistic education. A total of 4,870 labeled images are used for training and validation, and 500 images are reserved for testing across five core posture categories: standing, sitting, jumping, crouching, and lying. From each image, comprehensive skeletal features are extracted, including joint coordinates, joint angles, limb lengths, and symmetry metrics. Multiple classification algorithms, both traditional (KNN, SVM, Random Forest) and deep learning-based (LSTM, Transformer), are compared to identify effective combinations of features and models. Experimental results show that deep learning models achieve superior accuracy on complex postures, while traditional models remain competitive when given low-dimensional features. Beyond classification, the system integrates posture recognition with a visual recommendation module: recognized poses are used to retrieve matched examples from a reference library, allowing instructors to browse and select posture suggestions for learners. This semi-automated feedback loop improves teaching interactivity and efficiency. Among all evaluated methods, the Transformer achieves the best accuracy of 92.7%, demonstrating the effectiveness of the closed-loop framework in supporting pose classification and model training. The proposed framework contributes both algorithmic insights and a novel application design for posture-driven educational support systems.
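The abstract does not detail how the skeletal descriptors are computed, but a minimal sketch of the kind of features it names (joint angles, normalized limb lengths, and a left/right symmetry score) might look as follows. The COCO-style keypoint indices, the specific joint triplets, and the torso-length normalization are illustrative assumptions, not the authors' exact definitions.

```python
import numpy as np

# Hypothetical COCO-style keypoint indices; the paper does not specify its skeleton format.
L_SHOULDER, R_SHOULDER = 5, 6
L_ELBOW, R_ELBOW = 7, 8
L_WRIST, R_WRIST = 9, 10
L_HIP, R_HIP = 11, 12
L_KNEE, R_KNEE = 13, 14
L_ANKLE, R_ANKLE = 15, 16

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by the segments b->a and b->c."""
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def extract_features(kp):
    """kp: (17, 2) array of 2D joint coordinates for one image."""
    feats = []
    # Joint angles at elbows, knees, and hips.
    for a, b, c in [(L_SHOULDER, L_ELBOW, L_WRIST), (R_SHOULDER, R_ELBOW, R_WRIST),
                    (L_HIP, L_KNEE, L_ANKLE), (R_HIP, R_KNEE, R_ANKLE),
                    (L_SHOULDER, L_HIP, L_KNEE), (R_SHOULDER, R_HIP, R_KNEE)]:
        feats.append(joint_angle(kp[a], kp[b], kp[c]))
    # Limb lengths, normalized by torso length to remove scale effects.
    torso = np.linalg.norm(kp[L_SHOULDER] - kp[L_HIP]) + 1e-8
    for a, b in [(L_SHOULDER, L_ELBOW), (L_ELBOW, L_WRIST),
                 (L_HIP, L_KNEE), (L_KNEE, L_ANKLE)]:
        feats.append(np.linalg.norm(kp[a] - kp[b]) / torso)
    # Simple left/right symmetry metric: mean difference of mirrored reach distances.
    left = [np.linalg.norm(kp[L_SHOULDER] - kp[L_WRIST]), np.linalg.norm(kp[L_HIP] - kp[L_ANKLE])]
    right = [np.linalg.norm(kp[R_SHOULDER] - kp[R_WRIST]), np.linalg.norm(kp[R_HIP] - kp[R_ANKLE])]
    feats.append(float(np.mean(np.abs(np.array(left) - np.array(right))) / torso))
    return np.array(feats)

# Example usage with placeholder keypoints.
features = extract_features(np.random.rand(17, 2))
```

Fixed-length feature vectors of this kind could feed the traditional classifiers (KNN, SVM, Random Forest) directly, whereas the LSTM and Transformer variants would more naturally consume keypoint sequences; the exact input representations used in the paper are not specified in the abstract.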
