Monocular suture needle pose detection using synthetic data augmented convolutional neural network


Abstract

PURPOSE: Robotic microsurgery enhances the surgeon's dexterity and stability for performing precise, delicate procedures at the microscopic level. Accurate needle pose estimation is critical for robotic micro-suturing, enabling optimized insertion trajectories and facilitating autonomous control. However, accurately estimating the pose of a needle during manipulation, particularly under monocular vision, remains a challenge. This study proposes a convolutional neural network (CNN)-based method to estimate the pose of a suture needle from monocular images.

METHODS: The 3D pose of the needle is estimated from keypoint information in 2D images. A CNN was trained to estimate the positions of keypoints on the needle, specifically the tip, middle, and end points. A hybrid dataset comprising images from both real-world and synthetic simulated environments was developed to train the model. An algorithm was then designed to estimate the 3D positions of these keypoints. The 2D keypoint detection and 3D orientation estimation were evaluated with translation and orientation error metrics, respectively.

RESULTS: In experiments on synthetic data, the average translation errors of the tip, middle, and end points were 0.107 mm, 0.118 mm, and 0.098 mm, and the average angular errors were 12.75° for the normal vector and 15.55° for the direction vector. On real data, the method achieved average 2D translation errors of 0.047 mm, 0.052 mm, and 0.049 mm for the respective keypoints, with 93.85% of detected keypoints having errors below 4 pixels.

CONCLUSIONS: This study presents a CNN-based method, augmented with synthetic images, to estimate the pose of a suture needle under monocular vision. Experimental results indicate that the method effectively estimates the 2D positions and 3D orientations of the suture needle in synthetic images. The model also performs reasonably on real data, highlighting its promise for real-time application in robotic microsurgery.
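The abstract evaluates orientation via the needle's normal and direction vectors, both recoverable from the three estimated 3D keypoints. The sketch below illustrates one plausible formulation, assuming (the paper does not specify these conventions) that the direction vector runs from the end point to the tip and the normal is the cross product of two chords spanning the needle's plane; the angular-error metric is the standard angle between estimated and ground-truth unit vectors.

```python
import numpy as np

def needle_orientation(tip, mid, end):
    """Direction and plane-normal unit vectors from three 3D keypoints.

    Assumed conventions (illustrative, not from the paper):
    - direction: end -> tip, normalized
    - normal: cross product of the chords (mid - end) and (tip - end),
      i.e., the normal of the plane containing the curved needle
    """
    tip, mid, end = (np.asarray(p, dtype=float) for p in (tip, mid, end))
    direction = tip - end
    direction /= np.linalg.norm(direction)
    normal = np.cross(mid - end, tip - end)
    normal /= np.linalg.norm(normal)
    return direction, normal

def angular_error_deg(v_est, v_gt):
    """Angle in degrees between two unit vectors (the orientation metric)."""
    cos_angle = np.clip(np.dot(v_est, v_gt), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle))
```

For example, with keypoints tip = (1, 0, 0), mid = (0.5, 0.5, 0), end = (0, 0, 0), the direction is (1, 0, 0) and the normal is parallel to the z-axis; comparing two orthogonal unit vectors with `angular_error_deg` yields 90°.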
