Evaluating segment anything model (SAM) on MRI scans of brain tumors

Abstract

Automatically segmenting anatomical structures from brain images is a long-standing challenge, owing to subject- and image-level variability and the limited availability of annotated data. The Segment Anything Model (SAM), developed by Meta, is a foundation model trained to produce zero-shot segmentations with or without interactive user input, and has shown notable performance across diverse objects and image domains without explicit prior training. This study evaluated SAM for brain tumor segmentation on two publicly available Magnetic Resonance Imaging (MRI) datasets, analyzing both its standalone segmentation output and its performance when guided by user interaction through point prompts and bounding box inputs. SAM was versatile across configurations and datasets, with bounding box prompts consistently outperforming the other modes in localized precision, achieving average Dice scores of 0.68 on TCGA and 0.56 on BraTS, along with average IoU values of 0.89 and 0.65, respectively, especially for tumors with low-to-medium boundary curvature. Inconsistencies were observed, particularly with respect to variations in tumor size, shape, and texture. The study concludes that while SAM can automate medical image segmentation, further training and careful implementation are necessary before it can support diagnostic use, especially on challenging cases such as MRI scans of brain tumors.
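The Dice score and IoU reported above are the two standard overlap metrics for segmentation evaluation. As a minimal illustrative sketch (not the authors' evaluation code), they can be computed from binary masks as follows:

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2.0 * intersection / total if total else 1.0

def iou_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """IoU = |A ∩ B| / |A ∪ B| for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return intersection / union if union else 1.0

# Toy 4x4 masks: ground truth covers 3 pixels, the prediction
# covers 3 pixels and overlaps the ground truth on 2 of them.
gt = np.zeros((4, 4), dtype=bool)
gt[1, 1:4] = True      # 3 ground-truth pixels
pred = np.zeros((4, 4), dtype=bool)
pred[1, 0:3] = True    # 3 predicted pixels, 2 shared with gt
```

Note that Dice is always at least as large as IoU for the same pair of masks (Dice = 2·IoU / (1 + IoU)), which is worth keeping in mind when comparing the two metrics across datasets.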
