Domain-agnostic weakly supervised surgical instrument segmentation

Abstract

Recent advancements in visual foundation models open new avenues in the field of surgical instrument segmentation in medical images. Segmentation foundation models provide high segmentation accuracy for objects of interest that are selected via prompts in the form of points, bounding boxes, or text. However, choosing suitable prompts either requires manual interaction or relies on two-stage pipelines built on supervised, typically domain-specific models. This limits their applicability for domain-agnostic surgical instrument segmentation. We propose a method for surgical instrument segmentation that leverages the power of the segmentation foundation model SAM2 while eliminating the need for a user-defined input prompt or domain-specific annotated datasets. We achieve this by using an anomaly detector trained on non-instrument images to identify instruments as unseen regions and thereby define a SAM2 input prompt based solely on image-level annotations. On three surgical instrument segmentation datasets from diverse domains (EndoVis2017, CaDIS, and PASO-SIS), we achieve mean Normalized Surface Distances ranging from [Formula: see text]. This demonstrates the competitiveness of our method compared to alternatives, while its training- and mask-free nature makes it well-suited for surgical workflow integration. By simplifying surgical instrument segmentation, we advance the field of computer-assisted surgery and unlock a wide variety of assistance functions with minimal effort. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1038/s41598-026-43054-1.
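The core idea, converting the output of an anomaly detector into a prompt for a segmentation foundation model, can be illustrated with a minimal sketch. The function below is a hypothetical illustration, not the authors' implementation: it assumes the anomaly detector yields a per-pixel score map in which unseen regions (instruments) score high, thresholds it, and keeps the strongest peaks as foreground point prompts in the `(x, y)` plus label format that SAM2-style predictors accept. The threshold and number of points are illustrative parameters.

```python
import numpy as np

def anomaly_to_point_prompts(anomaly_map, threshold=0.5, max_points=3):
    """Convert a per-pixel anomaly map into point prompts for a
    SAM2-style predictor (hypothetical sketch, not the paper's code).

    Pixels scoring at or above `threshold` are treated as candidate
    instrument regions; the `max_points` highest-scoring pixels become
    foreground prompts (label 1).
    """
    mask = anomaly_map >= threshold
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        # No anomalous region found: return empty prompt arrays.
        return np.empty((0, 2)), np.empty((0,), dtype=int)
    # Rank anomalous pixels by score, strongest first.
    scores = anomaly_map[ys, xs]
    order = np.argsort(scores)[::-1][:max_points]
    # Point prompts are (x, y) coordinates; label 1 marks foreground.
    points = np.stack([xs[order], ys[order]], axis=1).astype(float)
    labels = np.ones(len(order), dtype=int)
    return points, labels

# Toy anomaly map with one high-scoring "instrument" region.
amap = np.zeros((8, 8))
amap[2:4, 5:7] = np.array([[0.9, 0.8], [0.7, 0.6]])
pts, lbls = anomaly_to_point_prompts(amap, threshold=0.5, max_points=2)
# The resulting prompts could then be passed to a SAM2 predictor,
# e.g. predictor.predict(point_coords=pts, point_labels=lbls).
```

Because the prompts are derived from the anomaly map alone, the pipeline needs no pixel-level masks: only image-level knowledge of what "non-instrument" scenes look like, which is what makes the approach training- and mask-free at segmentation time.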
