Plug-and-play segment anything model improves nnUNet performance


Abstract

BACKGROUND: The automatic segmentation of medical images has widespread applications in modern clinical workflows. The Segment Anything Model (SAM), a recent foundation model in computer vision, has become a universal tool for image segmentation without the need for domain-specific training. However, SAM's reliance on prompts necessitates human-computer interaction during inference, and its performance in specialized domains can be limited without additional adaptation. In contrast, traditional models like nnUNet perform segmentation fully automatically at inference time and can work well within their target domains, but they require extensive training on domain-specific datasets. PURPOSE: To leverage the advantages of both foundation and domain-specific models and achieve fully automated segmentation with limited training samples, we propose nnSAM, which combines the robust feature extraction capabilities of SAM with the automatic configuration abilities of nnUNet to enhance the accuracy and robustness of medical image segmentation on small datasets. METHODS: We propose the nnSAM model for small-sample medical image segmentation. We optimized for this goal via two main approaches: first, we integrated the feature extraction capabilities of SAM with the automatic configuration advantages of nnUNet, enabling robust feature extraction and domain-specific adaptation on small datasets. Second, during training we designed a boundary shape supervision loss based on level set functions and curvature calculations, enabling the model to learn anatomical shape priors from limited annotated data. RESULTS: We conducted quantitative and qualitative assessments of our proposed method on four segmentation tasks: brain white matter, liver, lung, and heart segmentation. Our method achieved the best performance across all tasks.
Specifically, in brain white matter segmentation using 20 training samples, nnSAM achieved the highest DICE score of 82.77 (±10.12)% and the lowest average surface distance (ASD) of 1.14 (±1.03) mm, compared to nnUNet, which had a DICE score of 79.25 (±17.24)% and an ASD of 1.36 (±1.63) mm. A sample-size study shows that the advantage of nnSAM becomes more prominent with fewer training samples. CONCLUSIONS: A comprehensive evaluation on multiple small-sample segmentation tasks demonstrates significant improvements in segmentation performance by nnSAM, highlighting the vast potential of small-sample learning.
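The abstract describes fusing SAM's feature extraction with nnUNet's automatically configured network, but does not specify the fusion mechanism. A common design for such dual-encoder models is to align the frozen SAM image embedding to the task network's feature grid and concatenate along the channel axis before the shared decoder. The following is a minimal numpy sketch of that idea; the function names, feature shapes, and nearest-neighbour alignment are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def nearest_resize(feat, out_hw):
    # feat: (C, H, W) -> (C, out_h, out_w) via nearest-neighbour index lookup
    c, h, w = feat.shape
    oh, ow = out_hw
    rows = np.arange(oh) * h // oh  # source row for each output row
    cols = np.arange(ow) * w // ow  # source column for each output column
    return feat[:, rows][:, :, cols]

def fuse_features(f_sam, f_unet):
    # Hypothetical fusion: upsample the (frozen) SAM embedding to the
    # nnUNet feature resolution, then stack the two along channels so a
    # shared decoder sees both generic and domain-specific features.
    f_sam_up = nearest_resize(f_sam, f_unet.shape[1:])
    return np.concatenate([f_sam_up, f_unet], axis=0)
```

In practice the concatenated map would feed a learned projection (e.g. a 1×1 convolution) inside the decoder; the sketch stops at the fusion step itself.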
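The boundary shape supervision loss is said to be built on level set functions and curvature, whose exact formulation is not given in the abstract. One plausible form, sketched below purely as an assumption, represents each mask as a signed-distance level set φ, computes the boundary curvature κ = div(∇φ/|∇φ|) by finite differences, and penalizes the curvature mismatch between prediction and ground truth, encouraging the network to match anatomical boundary shape rather than only pixel overlap.

```python
import numpy as np

def curvature(phi, eps=1e-8):
    # Curvature of the level sets of phi: kappa = div(grad(phi)/|grad(phi)|).
    gy, gx = np.gradient(phi)              # finite-difference gradients
    norm = np.sqrt(gx**2 + gy**2) + eps    # eps guards flat regions
    nx, ny = gx / norm, gy / norm          # unit normal field
    nyy, _ = np.gradient(ny)               # d(ny)/dy
    _, nxx = np.gradient(nx)               # d(nx)/dx
    return nxx + nyy

def boundary_shape_loss(phi_pred, phi_gt):
    # Hypothetical shape-prior term: mean absolute curvature mismatch
    # between predicted and ground-truth level-set functions.
    return np.abs(curvature(phi_pred) - curvature(phi_gt)).mean()
```

The loss is zero for identical level sets and grows as the predicted boundary's shape deviates from the reference; in training it would be added to a standard overlap loss such as Dice, with φ obtained from the masks via a signed distance transform.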
