Autonomous countertraction for secure field of view in laparoscopic surgery using deep reinforcement learning

Abstract

PURPOSE: Countertraction is a vital technique in laparoscopic surgery, stretching the tissue surface for incision and dissection. Given the technical difficulty and frequency of countertraction, automating it has the potential to significantly reduce surgeons' workload. Although several automation methods have been proposed, none yet achieves both optimal tissue visibility and adequate tension for incision. We therefore propose a method for autonomous countertraction that improves the planarity and visibility of the tissue surface.

METHODS: We constructed a neural network that combines a point cloud convolutional neural network (CNN) with a deep reinforcement learning (RL) model. The network continuously controls the forceps position based on the tissue surface shape observed by a camera and the current forceps position. RL training is conducted in a physical simulation environment, and verification experiments are performed in both simulation and phantom environments. Performance is evaluated by plane error, the average distance between the tissue surface and its least-squares plane, and by angle error, the angle between the tissue surface normal vector and the camera's optical axis.

RESULTS: The plane error decreased under all conditions in both the simulation and phantom environments, and the angle error decreased in 93.3% of cases. In simulation, the plane error decreased from 3.6 ± 1.5 mm to 1.1 ± 1.8 mm, and the angle error from 29 ± 19° to 14 ± 13°. In the phantom environment, the plane error decreased from 0.96 ± 0.24 mm to 0.39 ± 0.23 mm, and the angle error from 32 ± 29° to 17 ± 20°.

CONCLUSION: The proposed neural network was validated in both simulation and phantom experimental settings, confirming that traction control improved tissue planarity and visibility. These results demonstrate the feasibility of automating countertraction with the proposed model.
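As a concrete illustration of the two evaluation metrics, the sketch below fits a least-squares plane to a tissue-surface point cloud and computes the plane error (mean point-to-plane distance) and the angle error (angle between the fitted plane's normal and the camera's optical axis). This is a minimal NumPy reconstruction from the definitions given in the abstract, not the authors' code; the function names and the SVD-based plane fit are illustrative assumptions.

```python
import numpy as np

def fit_plane_normal(points: np.ndarray) -> np.ndarray:
    """Unit normal of the least-squares plane through an (N, 3) point cloud.

    The normal is the right singular vector associated with the smallest
    singular value of the centered point matrix.
    """
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]

def plane_error(points: np.ndarray) -> float:
    """Average distance (same units as the points, e.g. mm) from each
    surface point to the least-squares plane."""
    centered = points - points.mean(axis=0)
    normal = fit_plane_normal(points)
    return float(np.abs(centered @ normal).mean())

def angle_error(points: np.ndarray, optical_axis: np.ndarray) -> float:
    """Angle in degrees between the fitted surface normal and the camera's
    optical axis. abs() makes the result independent of the arbitrary
    sign of the fitted normal."""
    normal = fit_plane_normal(points)
    axis = optical_axis / np.linalg.norm(optical_axis)
    cos_angle = np.clip(abs(normal @ axis), 0.0, 1.0)
    return float(np.degrees(np.arccos(cos_angle)))
```

Under this reading, a perfectly flat tissue patch facing the camera would yield a plane error of 0 mm and an angle error of 0°, matching the direction of improvement reported in the results:

```python
rng = np.random.default_rng(0)
surface = rng.random((500, 3))          # placeholder point cloud (hypothetical data)
camera_axis = np.array([0.0, 0.0, 1.0]) # camera looking along +z (assumed convention)
print(plane_error(surface), angle_error(surface, camera_axis))
```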
