MuGu: Mutual Guidance Learning between Pretrained SAM and a Lightweight Model for Medical Image Segmentation



Abstract

Medical image segmentation (MedISeg) remains a challenging task. Current methods include specialized but poorly generalizable lightweight models and powerful yet resource-intensive pretrained models such as SAM-Med. Although existing combination methods seek to compensate for their respective weaknesses, the excessive involvement of SAM-Med often limits the lightweight model's performance during training. To address this issue, we propose MuGu, a novel mutual-guidance learning framework that enables a bidirectional exchange of learning ability between the pretrained model and the lightweight model. Specifically, MuGu introduces two core mechanisms: 1) a Confidence Prompt Guidance (CPG) mechanism that dynamically selects the samples with the lowest inference confidence as SAM-Med's next prompts, guiding SAM-Med's participation in the lightweight model's training process; and 2) an Ensemble Structure Boundary Guidance (ESBG) mechanism that integrates the predictions of SAM-Med and the lightweight backbone via an Adaptive Combination Attention (ACA) module to produce refined boundary features, which are then incorporated into the optimization of the lightweight backbone. Extensive experiments on four public 2D/3D segmentation datasets demonstrate that MuGu achieves state-of-the-art performance relative to strong baselines.
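The abstract does not specify how inference confidence is measured for the CPG mechanism, so the following is only a minimal illustrative sketch: it assumes confidence is the mean distance of each pixel's foreground probability from the 0.5 decision boundary, and selects the least-confident samples as candidates for SAM-Med's next prompts. The function name and the confidence measure are assumptions, not the paper's actual definitions.

```python
import numpy as np

def select_lowest_confidence_prompts(prob_maps, k=1):
    """Pick the k samples whose predictions are least confident.

    prob_maps: (N, H, W) array of foreground probabilities from the
    lightweight model. Per-sample confidence (assumed here) is the mean
    distance of pixel probabilities from the 0.5 decision boundary.
    Returns the indices of the k least-confident samples.
    """
    confidence = np.abs(prob_maps - 0.5).mean(axis=(1, 2))
    return np.argsort(confidence)[:k]

# Toy example: sample 0 is uncertain (~0.5 everywhere), sample 1 is confident.
probs = np.stack([np.full((4, 4), 0.52), np.full((4, 4), 0.95)])
print(select_lowest_confidence_prompts(probs, k=1))  # -> [0]
```

In an actual training loop, the selected samples would then be forwarded to SAM-Med with prompts derived from the lightweight model's predictions; the ranking step shown here is only the selection half of that process.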
