Comparative Benchmark of Sampling-Based and DRL Motion Planning Methods for Industrial Robotic Arms

Abstract

This study presents a comprehensive comparison between classical sampling-based motion planners from the Open Motion Planning Library (OMPL) and a learning-based planner based on Soft Actor-Critic (SAC) for motion planning in industrial robotic arms. Using a UR3e robot equipped with an RG2 gripper, we constructed a large-scale dataset of over 100,000 collision-free trajectories generated with MoveIt-integrated OMPL planners. These trajectories were used to train a DRL agent via curriculum learning and expert demonstrations. Both approaches were evaluated on key metrics such as planning time, success rate, and trajectory smoothness. Results show that the DRL-based planner achieves higher success rates and significantly lower planning times, producing more compact and deterministic trajectories. Time-optimal parameterization using TOPPRA ensured the dynamic feasibility of all trajectories. While classical planners retain advantages in zero-shot adaptability and environmental generality, our findings highlight the potential of DRL for real-time and high-throughput motion planning in industrial contexts. This work provides practical insights into the trade-offs between traditional and learning-based planning paradigms, paving the way for hybrid architectures that combine their strengths.
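To make the evaluation metrics concrete, here is a minimal, self-contained sketch of how success rate and trajectory smoothness might be scored for a batch of planning queries. The abstract does not specify the exact smoothness measure, so integrated squared jerk is assumed here; the helper names `smoothness_cost` and `success_rate` are illustrative, not from the paper.

```python
def smoothness_cost(positions, dt):
    """Integrated squared jerk of a 1-D joint trajectory sampled at step dt.

    Assumed metric (the paper's exact smoothness measure is not stated here);
    lower is smoother, and a constant-velocity motion scores zero.
    """
    # Third-order finite difference approximates jerk at each interior sample.
    jerk = [
        (positions[i + 3] - 3 * positions[i + 2]
         + 3 * positions[i + 1] - positions[i]) / dt ** 3
        for i in range(len(positions) - 3)
    ]
    return sum(j * j for j in jerk) * dt


def success_rate(outcomes):
    """Fraction of planning queries that returned a collision-free path."""
    return sum(outcomes) / len(outcomes)


# A constant-velocity joint motion has zero jerk, so its cost is (numerically) zero,
# while a trajectory with an abrupt spike scores strictly higher.
straight = [0.1 * i for i in range(20)]
spiky = [0.0, 0.0, 1.0, 0.0, 0.0]
print(smoothness_cost(straight, dt=0.1))  # ~0.0
print(smoothness_cost(spiky, dt=0.1))     # > 0
print(success_rate([1, 1, 1, 0]))         # 0.75
```

Per-joint costs would be summed across the arm's six joints in practice; planning time is simply wall-clock time per query and needs no helper.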
