Shape-aware cross-modal domain adaptive segmentation model



Abstract

Cross-modal unsupervised domain adaptation (UDA) aims to transfer segmentation models trained on a labeled source modality to an unlabeled target modality. However, existing methods often fail to fully exploit shape priors and intermediate feature representations, resulting in limited generalization ability in cross-modal transfer tasks. To address this challenge, we propose a segmentation model based on shape-aware adaptive weighting (SAWS) that enhances the model's ability to perceive the target area and to capture both global and local information. Specifically, we design a multi-angle strip-shaped shape perception (MSSP) module that captures shape features from multiple orientations through an angular pooling strategy, improving structural modeling under cross-modal settings. In addition, an adaptive weighted hierarchical contrastive (AWHC) loss is introduced to fully leverage intermediate features and enhance segmentation accuracy for small target structures. The proposed method is evaluated on the multi-modality whole heart segmentation (MMWHS) dataset. Experimental results demonstrate that SAWS achieves superior performance in cross-modal cardiac segmentation tasks, with a Dice score of 70.1% and an average symmetric surface distance (ASSD) of 4.0 for the computed tomography (CT)→magnetic resonance imaging (MRI) task, and a Dice score of 83.8% and an ASSD of 3.7 for the MRI→CT task, outperforming existing state-of-the-art methods. Overall, this study proposes a shape-aware cross-modal medical image segmentation method that effectively improves the structure-aware ability and generalization performance of the UDA model.
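The abstract does not give the internals of the MSSP module, but the idea of capturing shape features from multiple orientations via an angular pooling strategy can be illustrated with a minimal sketch. The code below is a hypothetical NumPy illustration, not the authors' implementation: it strip-pools a 2-D feature map along four orientations (horizontal, vertical, and the two diagonals), broadcasts each pooled profile back to the input size, and averages the results, so elongated anatomical structures in any of these directions are emphasized.

```python
import numpy as np

def multi_angle_strip_pool(feat):
    """Hypothetical sketch of multi-angle strip pooling on a 2-D feature map.

    Each orientation averages the map along thin strips and broadcasts the
    pooled profile back to the full spatial size; the four orientation-aware
    maps are then fused by a simple average.
    """
    h, w = feat.shape
    # Horizontal strips: one average per row, broadcast across columns.
    horiz = np.repeat(feat.mean(axis=1, keepdims=True), w, axis=1)
    # Vertical strips: one average per column, broadcast across rows.
    vert = np.repeat(feat.mean(axis=0, keepdims=True), h, axis=0)
    # Diagonal and anti-diagonal strips: average each offset diagonal.
    diag = np.empty_like(feat)
    anti = np.empty_like(feat)
    flipped = feat[:, ::-1]  # reversing columns turns anti-diagonals into diagonals
    for off in range(-h + 1, w):
        d = feat.diagonal(off)
        rows = np.arange(max(-off, 0), max(-off, 0) + d.size)
        cols = rows + off
        diag[rows, cols] = d.mean()
        anti[rows, w - 1 - cols] = flipped.diagonal(off).mean()
    # Fuse the four orientation-aware maps (simple average as a placeholder
    # for whatever learned fusion the actual module uses).
    return (horiz + vert + diag + anti) / 4.0
```

In a real network such pooling would operate per channel inside a convolutional block, with learned weights fusing the orientation maps; the fixed average above only serves to show the multi-orientation pooling pattern.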
