A hybrid CNN-Transformer network integrating multiscale spatially detailed features for medical image segmentation



Abstract

The rapid advancement of deep learning has established Convolutional Neural Networks (CNNs) as the mainstream approach for medical image segmentation, yet their limited receptive field hinders the capture of long-range dependencies. While Transformers excel at modeling global features via self-attention, their high computational complexity makes high-resolution image processing costly. To leverage the complementary strengths of both architectures and integrate local and global features in a lightweight framework for improved accuracy and efficiency, this work proposes a novel encoder built on parallel CNN and Swin Transformer branches. The two branches are integrated by the Semantics and Detail Infusion (SDI) module, which fuses multi-scale features and employs attention to prioritize critical details, enriching the features used for resolution recovery in the decoder. Evaluations were conducted on two publicly available datasets, the Synapse Multi-Organ Segmentation dataset and the Aortic Vessel Tree dataset. The proposed model achieved Dice coefficients of 84.19% and 87.91%, respectively, with corresponding Hausdorff Distances of 12.64 mm and 7.06 mm. These results represent significant improvements over the UNet benchmark, with Dice score gains of 7.34% and 5.02%, respectively. They further underscore the model's robustness, efficiency, and clinical relevance in accurately delineating complex anatomical structures, particularly in abdominal segmentation tasks. By effectively fusing the advantages of CNNs and Transformers, our approach meets high-performance standards for medical image segmentation while offering practical benefits for real-world clinical deployment in resource-constrained environments. The code is publicly available at https://github.com/Palpitate-v/HybridNet.
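The abstract describes the SDI module as fusing multi-scale encoder features and weighting them by attention before the decoder recovers resolution. The following is a minimal NumPy sketch of that general idea, not the paper's actual implementation: all function names and the global-average attention scoring are illustrative assumptions; the real module operates on learned convolutional and Swin Transformer features.

```python
import numpy as np

def upsample(feat, factor):
    # Nearest-neighbor upsampling along the spatial axes (H, W).
    return feat.repeat(factor, axis=0).repeat(factor, axis=1)

def sdi_fuse(features):
    """Hypothetical SDI-style fusion sketch (assumption, not the paper's code):
    align multi-scale feature maps to the finest resolution, score each scale
    by its global average response, and blend with softmax attention weights."""
    target_h = max(f.shape[0] for f in features)
    aligned = [upsample(f, target_h // f.shape[0]) for f in features]
    # One scalar attention score per scale (a stand-in for learned attention).
    scores = np.array([f.mean() for f in aligned])
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over scales
    return sum(w * f for w, f in zip(weights, aligned))

# Example: fuse a fine 8x8 map with a coarse 4x4 map (4 channels each).
fine = np.random.rand(8, 8, 4)
coarse = np.random.rand(4, 4, 4)
fused = sdi_fuse([fine, coarse])  # shape (8, 8, 4), ready for a decoder stage
```

The fused map keeps the finest spatial resolution, matching the abstract's point that the enriched features support resolution recovery in the decoder.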
