Benchmarking reinforcement learning algorithms for autonomous mechanical thrombectomy


Abstract

PURPOSE: Mechanical thrombectomy (MT) is the gold standard for treating acute ischemic stroke. However, challenges such as operator radiation exposure, reliance on operator experience, and limited treatment access remain. Although autonomous robotics could mitigate some of these limitations, current research lacks benchmarking of reinforcement learning (RL) algorithms for MT. This study evaluates the performance of Deep Deterministic Policy Gradient (DDPG), Twin Delayed Deep Deterministic Policy Gradient (TD3), Soft Actor-Critic (SAC), and Proximal Policy Optimization (PPO) for MT.

METHODS: Simulated endovascular interventions based on the open-source stEVE platform were used to train and evaluate the RL algorithms. We simulated navigation of a guidewire from the descending aorta to the supra-aortic arteries, a key phase in MT. The impact of tuning hyperparameters, such as learning rate and network size, was explored, and the optimized hyperparameters were then used for assessment on an MT benchmark.

RESULTS: Before tuning, DDPG achieved the highest success rate at 80% with a procedure time of 6.87 s when navigating to the supra-aortic arteries. After tuning, PPO achieved the highest success rate at 84% with a procedure time of 5.08 s. On the MT benchmark, TD3 recorded the highest success rate at 68% with a procedure time of 214.05 s.

CONCLUSION: This work advances autonomous endovascular navigation by establishing a benchmark for MT. The results underscore the impact of hyperparameter tuning on the performance of RL algorithms. Future research should extend this benchmark to identify the most effective RL algorithm.
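The hyperparameter-tuning step the abstract describes (sweeping values such as the learning rate and measuring task success) can be illustrated in miniature. The sketch below is not the paper's code or the stEVE environment; it is a hedged, self-contained toy in which a softmax policy gradient (REINFORCE) learns a two-armed bandit, and the same training loop is run at several assumed learning rates to show how that single hyperparameter changes the learned policy.

```python
import math
import random

# Toy stand-in for the paper's setup: a 2-armed bandit where arm 1 pays
# off more often, trained with a softmax policy gradient (REINFORCE).
# All names, reward probabilities, and learning rates are illustrative
# assumptions, not values from the study.

def train(learning_rate, episodes=2000, seed=0):
    rng = random.Random(seed)
    prefs = [0.0, 0.0]  # policy preference per arm
    for _ in range(episodes):
        # Softmax policy over the two arms.
        z = [math.exp(p) for p in prefs]
        probs = [v / sum(z) for v in z]
        arm = 0 if rng.random() < probs[0] else 1
        # Arm 1 rewards with probability 0.8, arm 0 with probability 0.2.
        reward = 1.0 if rng.random() < (0.8 if arm == 1 else 0.2) else 0.0
        # REINFORCE update: grad of log pi w.r.t. preference a is
        # 1[a == arm] - probs[a].
        for a in range(2):
            grad = (1.0 - probs[a]) if a == arm else -probs[a]
            prefs[a] += learning_rate * reward * grad
    z = [math.exp(p) for p in prefs]
    return z[1] / sum(z)  # final probability of choosing the better arm

# Hyperparameter sweep over learning rates, mirroring the tuning step.
for lr in (0.01, 0.1, 1.0):
    print(f"lr={lr}: P(best arm) = {train(lr):.2f}")
```

In the full study the analogous sweep would cover algorithm-specific hyperparameters (learning rate, network size) for DDPG, TD3, SAC, and PPO, with guidewire-navigation success rate and procedure time as the evaluation metrics instead of this toy's arm-selection probability.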
