A novel hybrid model of simplified and external attention coupled with enhanced CNN for medical image segmentation


Abstract

Although UNet has proven successful in a variety of medical image segmentation tasks, its capacity to capture global context is restricted by the finite receptive field inherent to convolutional operations. Transformers, in contrast, can capture long-range dependencies, so integrating a transformer into UNet can alleviate its limited receptive field. However, transformers typically rely heavily on large-scale pre-training and struggle to capture local features. To address these challenges, we propose SimEANet, a network with an encoder-decoder structure built on a hybrid CNN-Transformer architecture. We design an enhanced ResNet as a shallow feature extractor for the encoder and introduce the SimEA transformer as the encoder backbone. Finally, we use improved cascaded upsampling processors to obtain the segmentation result. The performance of SimEANet is substantiated through rigorous testing on two publicly accessible datasets. Extensive experiments demonstrate the competitiveness of our approach, which achieves average Dice Similarity Coefficients (DSC) of 82.35% and 91.85% on the two datasets. SimEANet notably enhances performance in multi-organ segmentation, achieving an advanced level of segmentation accuracy.
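The abstract names external attention as one of the two attention mechanisms in SimEA. External attention (Guo et al., "Beyond Self-Attention") replaces the quadratic pairwise interactions of self-attention with two small learnable external memory units, giving linear complexity in the number of tokens. The following is a minimal NumPy sketch of that general mechanism under assumed shapes, not the authors' SimEA implementation; the memory size `S` and the double-normalization step follow the original external-attention formulation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def external_attention(F, Mk, Mv):
    """External attention over token features.

    F  : (n, d) input token features
    Mk : (S, d) learnable external key memory
    Mv : (S, d) learnable external value memory
    Returns (n, d) attended features.
    """
    attn = softmax(F @ Mk.T, axis=1)                         # (n, S): attend to memory slots
    attn = attn / (attn.sum(axis=0, keepdims=True) + 1e-9)   # double normalization over tokens
    return attn @ Mv                                          # (n, d)

rng = np.random.default_rng(0)
F = rng.standard_normal((16, 32))    # 16 tokens, 32-dim features (illustrative sizes)
Mk = rng.standard_normal((8, 32))    # S = 8 memory slots
Mv = rng.standard_normal((8, 32))
out = external_attention(F, Mk, Mv)  # shape (16, 32)
```

Because `Mk` and `Mv` are shared across all inputs and much smaller than the token set, the cost is O(n·S·d) rather than the O(n²·d) of self-attention, which is what makes such hybrids attractive for high-resolution medical images.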
