Towards video-based injury risk assessment: predicting lifting loads from body pose trajectories


Abstract

Manual material handling tasks, such as lifting and lowering, are ubiquitous across industry sectors. Overexertion during these tasks is among the leading causes of workplace injuries. Previous studies have shown that lifting load is a key factor in determining the risk of injury. However, existing methods for measuring lifting load often rely on manual measurements, sensor fusion, or other techniques that are difficult to scale in practice. In this study, we present a vision-based approach that automatically predicts lifting load by analyzing human body pose trajectories extracted from video alone. Specifically, our method employs person detection, visual tracking, and human body pose estimation to extract pose trajectories and their kinematic features, which are then used to train a Transformer model for load prediction. To evaluate our method, we conducted a human subjects study in which 19 participants performed lifting and lowering tasks with varying postures. Our method achieved an average accuracy of 74.8% in distinguishing light from heavy objects, and an average accuracy of 50.8% in identifying three levels of lifting load (light, medium, heavy) across lifting and lowering tasks. These results demonstrate a first step towards computer-vision-based solutions for automatic, noninvasive, scalable injury risk assessment of manual material handling tasks.
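To make the feature-extraction step concrete, the sketch below computes simple kinematic features (a joint angle and its angular velocity) from a 2D pose trajectory. This is an illustrative assumption, not the paper's actual implementation: the keypoint layout (`SHOULDER`, `ELBOW`, `WRIST`), the choice of the elbow angle as the feature, and the frame rate are all hypothetical, and real pose estimators define their own keypoint indices.

```python
import math

# Hypothetical joint indices; real pose estimators (e.g. OpenPose,
# MediaPipe) each define their own keypoint layout.
SHOULDER, ELBOW, WRIST = 0, 1, 2

def joint_angle(a, b, c):
    """Angle at joint b (radians) between segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(v1[0], v1[1])
    n2 = math.hypot(v2[0], v2[1])
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

def kinematic_features(trajectory, fps=30.0):
    """Per-frame elbow angle and angular velocity from a pose trajectory.

    trajectory: list of frames, each a list of (x, y) keypoints.
    Returns a list of (angle, angular_velocity) tuples, one per frame;
    velocity is a finite difference scaled by the frame rate.
    """
    angles = [joint_angle(f[SHOULDER], f[ELBOW], f[WRIST]) for f in trajectory]
    feats = []
    for i, ang in enumerate(angles):
        vel = (angles[i] - angles[i - 1]) * fps if i > 0 else 0.0
        feats.append((ang, vel))
    return feats

# Toy two-frame trajectory: the elbow opens from 90 degrees to fully extended.
traj = [
    [(0.0, 1.0), (0.0, 0.0), (1.0, 0.0)],   # right angle at the elbow
    [(0.0, 1.0), (0.0, 0.0), (0.0, -1.0)],  # arm fully extended
]
feats = kinematic_features(traj, fps=30.0)
```

In the method described by the abstract, per-frame feature vectors like these would form the sequence fed to the Transformer classifier for load prediction.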
