Enhancing Robotic Grasping Detection Using Visual-Tactile Fusion Perception


Abstract

With the advancement of tactile sensors, researchers increasingly integrate tactile perception into robotics, but mostly for tasks such as object reconstruction, classification, recognition, and grasp-state assessment. In this paper, we rethink the relationship between visual and tactile perception and propose a novel robotic grasping detection method based on visual-tactile perception. First, we construct a visual-tactile dataset containing the grasp stability for each potential grasping position. Next, we introduce a novel Grasp Stability Prediction Module (GSPM) that generates a grasp stability probability map, providing the grasp detection network with prior knowledge of grasp stability for each possible grasp position. Finally, the map is multiplied element-wise with the corresponding color image and fed into the grasp detection network. Experimental results demonstrate that our visual-tactile fusion method significantly improves robotic grasping detection accuracy.
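The fusion step described above, element-wise multiplication of the grasp stability probability map with the color image before grasp detection, can be sketched as follows. This is a minimal illustration with NumPy; the image resolution, value ranges, and the GSPM itself are assumptions, not the paper's actual implementation.

```python
import numpy as np

# Assumed shapes for illustration only; the paper does not specify them here.
H, W = 224, 224

# Normalized RGB image in [0, 1], shape (H, W, 3).
rgb = np.random.rand(H, W, 3).astype(np.float32)

# Stand-in for the GSPM output: a per-pixel grasp stability
# probability map in [0, 1], shape (H, W).
stability_map = np.random.rand(H, W).astype(np.float32)

# Element-wise fusion: broadcast the single-channel map across the
# three color channels, weighting each pixel by its predicted stability.
fused = rgb * stability_map[:, :, None]

# The fused tensor keeps the image shape and would then be fed to the
# grasp detection network in place of the raw color image.
print(fused.shape)  # (224, 224, 3)
```

Because the stability map lies in [0, 1], the fusion acts as a soft spatial attention: regions predicted to yield unstable grasps are suppressed before the detection network ever sees them.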
