A Review of Deep Learning Approaches Based on Segment Anything Model for Medical Image Segmentation


Abstract

Medical image segmentation has changed significantly in recent years, driven largely by the development of foundation models. The introduction of the Segment Anything Model (SAM) marks a major shift from task-specific architectures to universal ones. This review discusses the adaptation of SAM to medical imaging, focusing on three primary domains. First, multimodal fusion frameworks implement semantic alignment across heterogeneous imaging modalities. Second, volumetric extensions move from slice-based processing to native 3D spatial reasoning, with architectures such as SAM3D, ProtoSAM-3D, and VISTA3D. Third, uncertainty-aware architectures integrate probabilistic calibration for clinical interpretability, as illustrated by the SAM-U and E-Bayes SAM models. A comparative analysis shows that parameter-efficient SAM derivatives achieve Dice coefficients of 81-95% while reducing annotation requirements by 56-73%. Future research directions include adaptive domain prompts, Bayesian self-correction mechanisms, and unified volumetric frameworks to enable autonomous generalisation across diverse medical imaging contexts.
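The Dice coefficient cited above (81-95% for SAM derivatives) is the standard overlap metric for segmentation masks: twice the intersection of prediction and ground truth, divided by the sum of their sizes. A minimal NumPy sketch (the function name and the toy masks are illustrative, not from the reviewed papers):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks.

    eps guards against division by zero when both masks are empty.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: two 4x4 masks of 8 pixels each, sharing one 4-pixel row.
a = np.zeros((4, 4), dtype=bool)
a[:2, :] = True   # rows 0-1
b = np.zeros((4, 4), dtype=bool)
b[1:3, :] = True  # rows 1-2
print(round(dice_coefficient(a, b), 2))  # 2*4 / (8+8) = 0.5
```

The same formula applies voxel-wise in 3D, which is how the volumetric models discussed in this review are typically evaluated.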
