Visual imitation learning from one-shot demonstration for multi-step robot pick and place tasks


Abstract

Imitation learning provides an intuitive approach to robot programming by enabling robots to learn directly from human demonstrations. While recent visual imitation learning methods have shown promise, they often depend on large datasets, which limits their applicability in manufacturing scenarios where tasks and objects are highly specialized. This paper proposes a one-shot visual imitation learning framework that allows robots to acquire multi-step pick-and-place tasks from a single video demonstration. The framework integrates hand detection, object detection, trajectory segmentation, and skill learning through Dynamic Movement Primitives (DMPs). Hand trajectories are mapped to the robot's end-effector, enabling the system to generalize to new object positions while significantly reducing data requirements. The approach is evaluated in simulation and achieves reliable reproduction of multi-step tasks. These results demonstrate the potential of one-shot visual imitation learning to reduce programming complexity and increase flexibility for industrial robot applications.
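The abstract names DMPs as the skill-learning component that lets a single demonstrated trajectory generalize to new object positions. As a rough illustration of that building block only, below is a minimal one-dimensional discrete DMP sketch in the standard Ijspeert-style formulation; the paper's own implementation is not available here, and all parameter names and values (`alpha_z`, `alpha_x`, `n_basis`, the basis widths) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

class DiscreteDMP:
    """Minimal 1-D discrete Dynamic Movement Primitive (Ijspeert-style sketch)."""

    def __init__(self, n_basis=25, alpha_z=25.0, alpha_x=8.0):
        self.alpha_z = alpha_z              # spring gain of the transformation system
        self.beta_z = alpha_z / 4.0         # damping chosen for critical damping
        self.alpha_x = alpha_x              # decay rate of the canonical system
        # Basis centers spaced evenly in phase x (exponentially in time)
        self.c = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))
        self.h = 1.0 / np.gradient(self.c) ** 2   # widths from center spacing

    def fit(self, y, dt):
        """Fit forcing-term weights to one demonstrated 1-D trajectory y."""
        self.y0, self.g = float(y[0]), float(y[-1])
        self.tau = (len(y) - 1) * dt                    # demonstration duration
        yd = np.gradient(y, dt)
        ydd = np.gradient(yd, dt)
        t = np.arange(len(y)) * dt
        x = np.exp(-self.alpha_x * t / self.tau)        # canonical phase, 1 -> ~0
        # Forcing term that would make the system reproduce the demo exactly
        f_target = (self.tau ** 2 * ydd
                    - self.alpha_z * (self.beta_z * (self.g - y) - self.tau * yd))
        psi = np.exp(-self.h * (x[:, None] - self.c) ** 2)   # (T, n_basis)
        xi = x * (self.g - self.y0)                           # regression input
        # Locally weighted regression, one weight per basis function
        self.w = ((psi * (xi * f_target)[:, None]).sum(0)
                  / ((psi * (xi ** 2)[:, None]).sum(0) + 1e-10))
        return self

    def rollout(self, dt, y0=None, g=None):
        """Integrate the DMP; passing a new y0/g generalizes to new positions."""
        y0 = self.y0 if y0 is None else y0
        g = self.g if g is None else g
        y, yd, x = y0, 0.0, 1.0
        out = []
        for _ in range(int(self.tau / dt) + 1):
            out.append(y)
            psi = np.exp(-self.h * (x - self.c) ** 2)
            f = (psi @ self.w) / (psi.sum() + 1e-10) * x * (g - y0)
            ydd = (self.alpha_z * (self.beta_z * (g - y) - self.tau * yd)
                   + f) / self.tau ** 2
            yd += ydd * dt
            y += yd * dt
            x += (-self.alpha_x * x / self.tau) * dt    # canonical system step
        return np.array(out)

# Example: fit to a synthetic minimum-jerk reach, then replay to a new goal,
# mimicking how a segmented hand trajectory could be retargeted to a new
# object position.
t = np.linspace(0.0, 1.0, 200)
demo = 0.3 * (10 * t**3 - 15 * t**4 + 6 * t**5)      # smooth 0 -> 0.3 reach
dmp = DiscreteDMP().fit(demo, dt=t[1] - t[0])
replayed = dmp.rollout(dt=t[1] - t[0], g=0.5)        # same shape, new goal 0.5
```

In this formulation the goal g enters both the spring term and the forcing-term scaling, which is what lets one demonstration transfer to shifted pick or place positions; a multi-step task would train one such primitive per segmented trajectory phase.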
