Abstract
With the advancement of tactile sensors, researchers have increasingly integrated tactile perception into robotics, though primarily for tasks such as object reconstruction, classification, recognition, and grasp state assessment. In this paper, we rethink the relationship between visual and tactile perception and propose a novel robotic grasp detection method based on visual-tactile perception. First, we construct a visual-tactile dataset that records the grasp stability of each potential grasp position. Next, we introduce a Grasp Stability Prediction Module (GSPM) that generates a grasp stability probability map, providing the grasp detection network with prior knowledge of the stability of each candidate grasp position. Finally, the map is multiplied element-wise with the corresponding color image and fed into the grasp detection network. Experimental results demonstrate that our visual-tactile fusion method significantly improves robotic grasp detection accuracy.
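The fusion step described above, weighting the color image element-wise by the grasp stability probability map, can be sketched as follows. This is a minimal illustration with assumed shapes and random placeholder data, not the paper's actual pipeline; the variable names (`color_image`, `stability_map`, `fused_input`) are hypothetical.

```python
import numpy as np

# Assumed shapes: an HxWx3 color image and an HxW stability probability map.
H, W = 4, 4
color_image = np.random.rand(H, W, 3).astype(np.float32)   # normalized RGB input
stability_map = np.random.rand(H, W).astype(np.float32)    # per-pixel grasp stability in [0, 1]

# Broadcast the single-channel map across the three color channels and
# weight the image element-wise before passing it to the grasp detector.
fused_input = color_image * stability_map[:, :, None]

print(fused_input.shape)  # (4, 4, 3)
```

The broadcasting keeps the fused tensor the same shape as the original image, so a standard grasp detection network can consume it without architectural changes.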