Sequential Human Assembly and Disassembly Motions in Human-Robot Coexisting Environments


Abstract

As human-robot systems and autonomous robots become increasingly prevalent, the need for task-oriented datasets to study human behaviors in shared spaces has grown significantly. We present a novel dataset focusing on sequential human assembly and disassembly motions in human-robot coexisting environments. It contains over 10,000 samples recorded from multi-view camera setups, each comprising synchronized RGB videos and 2D and 3D human skeletons. Data were collected from 33 participants with diverse physical characteristics and behavior preferences. This dataset highlights practical challenges such as partial occlusions, similar repetitive motions, and varying human behaviors, which are often overlooked in existing datasets and research. Technical validation through benchmarking with state-of-the-art deep learning models demonstrates the dataset's potential for practical applications. To support diverse research applications, the dataset provides raw and processed data with detailed annotations, including precise timestamps, procedure annotations, and Python code for reproducibility. It aims to advance research in human motion prediction, task-oriented robotic sequential decision-making, motion and task planning for autonomous robots, and human-robot collaborative policies.
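The abstract describes each sample as synchronized multi-view RGB video paired with 2D and 3D skeletons, timestamps, and procedure annotations. A minimal sketch of how one such per-frame record might be organized is shown below; the joint count, number of views, field names, and labels are all illustrative assumptions, not the dataset's published schema.

```python
from dataclasses import dataclass
import numpy as np

# Hypothetical layout for one per-frame sample. The constants below are
# assumptions for illustration only, not values from the actual dataset.
NUM_JOINTS = 17   # assumed skeleton joint count (e.g. a COCO-style skeleton)
NUM_VIEWS = 3     # assumed number of synchronized camera views

@dataclass
class FrameSample:
    timestamp: float          # seconds since the start of the sequence
    skeleton_3d: np.ndarray   # (NUM_JOINTS, 3) joint positions in world space
    skeletons_2d: np.ndarray  # (NUM_VIEWS, NUM_JOINTS, 2) pixel coordinates
    procedure_label: str      # assembly/disassembly step annotation (assumed)

def make_dummy_frame(t: float, label: str) -> FrameSample:
    """Build a synthetic frame to illustrate the per-frame structure."""
    rng = np.random.default_rng(int(t * 1000))
    return FrameSample(
        timestamp=t,
        skeleton_3d=rng.normal(size=(NUM_JOINTS, 3)),
        skeletons_2d=rng.uniform(0, 1920, size=(NUM_VIEWS, NUM_JOINTS, 2)),
        procedure_label=label,
    )

# A short 30 fps sequence of synchronized frames sharing one procedure label.
sequence = [make_dummy_frame(i / 30.0, "attach_part") for i in range(5)]
print(len(sequence), sequence[0].skeleton_3d.shape)
```

A loader for the real dataset would populate such records from the released raw and processed files, with the procedure annotations segmenting each sequence into assembly or disassembly steps.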
