GCSA-SegFormer: Transformer-Based Segmentation for Liver Tumor Pathological Images

Authors: Wen Jingbin, Yang Sihua, Li Weiqi, Cheng Shuqun
Pathological images are crucial for tumor diagnosis; however, because of their extremely high resolution, pathologists often spend considerable time and effort analyzing them, and diagnostic outcomes can be strongly influenced by subjective judgment. With the rapid advancement of artificial intelligence, deep learning models offer new possibilities for pathological image diagnosis, enabling pathologists to diagnose more quickly, accurately, and reliably, thereby improving work efficiency. This paper proposes a novel Global Channel Spatial Attention (GCSA) module aimed at enhancing the representational capability of input feature maps. The module combines channel attention, channel shuffling, and spatial attention to capture global dependencies within feature maps. By integrating the GCSA module into the SegFormer architecture, the resulting network, named GCSA-SegFormer, can more accurately capture global information and fine-grained detail in complex scenes. The proposed network was evaluated on a liver dataset and the publicly available ICIAR 2018 BACH dataset. On the liver dataset, GCSA-SegFormer improved MIoU by 1.12% and MPA by 1.15% over the baseline; on the BACH dataset, it improved MIoU by 1.26% and MPA by 0.39%. In addition, the network was compared with seven other semantic segmentation networks and performed well in all comparisons.
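The abstract describes the GCSA module only at a high level, as a composition of channel attention, channel shuffling, and spatial attention. The paper's own code is not reproduced here; the sketch below is an illustrative PyTorch interpretation of such a block, assuming SE-style channel attention, ShuffleNet-style channel shuffling, and a CBAM-style spatial gate. The class name GCSABlock and all hyperparameters (reduction, groups, kernel size) are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of a GCSA-style block: channel attention -> channel
# shuffle -> spatial attention. Names and hyperparameters are illustrative.
import torch
import torch.nn as nn


class GCSABlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16, groups: int = 4):
        super().__init__()
        # Channel attention: squeeze global context, then re-weight channels.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.groups = groups
        # Spatial attention: compress channels into a single-channel map
        # that re-weights each spatial position.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def _channel_shuffle(self, x: torch.Tensor) -> torch.Tensor:
        # ShuffleNet-style shuffle: mix information across channel groups.
        b, c, h, w = x.shape
        x = x.view(b, self.groups, c // self.groups, h, w)
        x = x.transpose(1, 2).contiguous()
        return x.view(b, c, h, w)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)          # channel attention
        x = self._channel_shuffle(x)          # channel shuffling
        pooled = torch.cat(                   # avg- and max-pooled descriptors
            [x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1
        )
        return x * self.spatial_gate(pooled)  # spatial attention


if __name__ == "__main__":
    # Example: refine a SegFormer-style feature map with 64 channels.
    features = torch.randn(2, 64, 128, 128)
    print(GCSABlock(64)(features).shape)  # torch.Size([2, 64, 128, 128])
```

In a SegFormer-style encoder-decoder, a block like this could, for example, be applied to the multi-scale encoder feature maps before the decoder head fuses them, which is where the reported gains in capturing global context and fine detail would plausibly arise.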
