DescriptorMedSAM: language-image fusion with multi-aspect text guidance for medical image segmentation


Abstract

Accurate organ segmentation is essential for clinical tasks such as radiotherapy planning and disease monitoring. Recent foundation models like MedSAM achieve strong results using point or bounding-box prompts but still require manual interaction. We propose DescriptorMedSAM, a lightweight extension of MedSAM that incorporates structured text prompts, ranging from simple organ names to combined shape and location descriptors, to enable click-free segmentation. DescriptorMedSAM employs a CLIP text encoder to convert radiology-style descriptors into dense embeddings, which are fused with visual tokens via a cross-attention block and a multi-scale feature extractor. We designed four descriptor types: Name (N), Name + Shape (NS), Name + Location (NL), and Name + Shape + Location (NSL), and evaluated them on the FLARE 2022 dataset under zero-shot and few-shot settings, where organs unseen during training must be segmented with minimal additional data. NSL prompts achieved the highest performance, with a Dice score of 0.9405 under full supervision, a 76.31% zero-shot retention ratio, and a 97.02% retention ratio after fine-tuning with only 50 labeled slices per unseen organ. Adding shape and location cues consistently improved segmentation accuracy, especially for small or morphologically complex structures. We demonstrate that structured language prompts can effectively replace spatial interactions, delivering strong zero-shot performance and rapid few-shot adaptation. By quantifying the role of descriptor design, this work lays the groundwork for scalable, prompt-aware segmentation models that generalize across diverse anatomical targets with minimal annotation effort.
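The fusion step described above, visual tokens attending to text-descriptor embeddings through cross-attention, can be sketched as follows. This is a minimal single-head illustration, not the paper's implementation: the token counts, embedding dimensions, random projection weights, and the residual-fusion choice are all assumptions for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fusion(visual, text, d_k=32, seed=0):
    """Fuse text embeddings into visual tokens via cross-attention.

    visual: (num_visual_tokens, d_vis) image-encoder tokens (queries)
    text:   (num_text_tokens, d_txt) descriptor embeddings (keys/values)
    Returns visual tokens with text information mixed in (residual add).
    Weights are random here; a real model would learn them.
    """
    rng = np.random.default_rng(seed)
    d_vis, d_txt = visual.shape[-1], text.shape[-1]
    W_q = rng.standard_normal((d_vis, d_k)) / np.sqrt(d_vis)
    W_k = rng.standard_normal((d_txt, d_k)) / np.sqrt(d_txt)
    W_v = rng.standard_normal((d_txt, d_vis)) / np.sqrt(d_txt)
    Q, K, V = visual @ W_q, text @ W_k, text @ W_v
    # Each visual token attends over the descriptor tokens.
    attn = softmax(Q @ K.T / np.sqrt(d_k))          # (n_vis, n_txt)
    return visual + attn @ V                        # residual fusion

# Hypothetical shapes: 196 visual tokens (14x14 patch grid),
# 4 descriptor tokens with a CLIP-like 512-dim embedding.
visual = np.random.default_rng(1).standard_normal((196, 64))
text = np.random.default_rng(2).standard_normal((4, 512))
fused = cross_attention_fusion(visual, text)
print(fused.shape)  # (196, 64): same shape as the visual tokens
```

The output keeps the visual-token shape, so the fused tokens can drop into the rest of the segmentation decoder unchanged; richer descriptors (NS, NL, NSL) simply contribute more informative key/value tokens.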
