Evaluating the Impact of Attention Mechanisms on a Fine-Tuned Neural Network for Magnetic Resonance Imaging Tumor Classification: A Comparative Analysis


Abstract

Background: Magnetic resonance imaging (MRI) is essential for brain tumor diagnosis. Deep learning models, such as Residual Network 50 Version 2 (ResNet50V2), have demonstrated strong performance in tumor classification; however, integrating attention mechanisms may further enhance diagnostic accuracy. This study evaluates the impact of different attention mechanisms on a ResNet50V2-based MRI tumor classification model for distinguishing among meningioma, glioma, pituitary tumors, and cases with no tumor.

Methods: A ResNet50V2-based model was trained on 3,096 annotated MRI scans from a publicly available dataset on Kaggle. Five model configurations were evaluated: baseline ResNet50V2, Squeeze-and-Excitation (SE), Convolutional Block Attention Module (CBAM), Self-Attention (SA), and Attention Gated Network (AGNet). Performance was assessed using accuracy, area under the receiver operating characteristic curve (AUC), precision, and recall. Two-proportion Z-tests were conducted to compare classification accuracies among models.

Results: The SE-enhanced model achieved the highest classification performance, with an accuracy of 98.4% and an AUC of 1.00, outperforming the base ResNet50V2 (92.6%) and the other attention-based frameworks (CBAM: 93.5%, SA: 91.6%, AGNet: 94.2%). Compared to the baseline model, the SE model also demonstrated improved meningioma and pituitary tumor classification (Z = 2.485, p = 0.013 and Z = 2.423, p = 0.015, respectively). Additionally, the SE model demonstrated superior precision and recall across all tumor classes.

Conclusion: Incorporating attention mechanisms significantly improves MRI-based tumor classification, with SE proving to be the most effective. These findings suggest that SE-enhanced models may improve diagnostic accuracy in both research and clinical applications. Future research should explore hybrid attention mechanisms, such as transformer-based models, and their broader applications in medical imaging.
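The Squeeze-and-Excitation mechanism that performed best here recalibrates channel responses in a convolutional feature map: a global-average-pooling "squeeze" summarizes each channel, and a small bottleneck network produces per-channel gates in (0, 1) that rescale the input. The following is a minimal NumPy sketch of that idea only, not the paper's implementation; the weight shapes, reduction ratio `r`, and random initialization are illustrative assumptions.

```python
import numpy as np

def se_block(feature_map, w1, b1, w2, b2):
    """Squeeze-and-Excitation sketch for a single (H, W, C) feature map.

    w1: (C, C//r) reduction weights; w2: (C//r, C) expansion weights.
    """
    # Squeeze: global average pooling over spatial dimensions -> (C,)
    z = feature_map.mean(axis=(0, 1))
    # Excitation: bottleneck MLP, ReLU then sigmoid gate per channel
    s = np.maximum(z @ w1 + b1, 0.0)             # reduce to C//r
    gate = 1.0 / (1.0 + np.exp(-(s @ w2 + b2)))  # restore to C, values in (0, 1)
    # Scale: reweight each channel of the input by its gate
    return feature_map * gate

# Illustrative usage with hypothetical shapes (C = 8 channels, ratio r = 4)
rng = np.random.default_rng(0)
C, r = 8, 4
x = rng.standard_normal((4, 4, C))
w1 = rng.standard_normal((C, C // r)) * 0.1
b1 = np.zeros(C // r)
w2 = rng.standard_normal((C // r, C)) * 0.1
b2 = np.zeros(C)
y = se_block(x, w1, b1, w2, b2)
```

Because the gates lie strictly in (0, 1), the block attenuates less informative channels while preserving the feature map's shape, which is what lets it drop into a ResNet stage without architectural changes.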
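The two-proportion Z-test used to compare model accuracies can be computed directly from the correct/total counts of each model. This is a generic sketch of the pooled-variance form of the test; the counts in the usage example are hypothetical and are not taken from the study's data.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion Z-test for comparing two classification accuracies.

    x1, x2: number of correct predictions; n1, n2: number of test samples.
    Returns (z statistic, two-sided p-value).
    """
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)  # pooled proportion under H0: p1 == p2
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF via the error function
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical example: 80/100 vs 60/100 correct
z_stat, p_val = two_proportion_z(80, 100, 60, 100)
```

A p-value below 0.05, as reported for the SE model's meningioma and pituitary comparisons, indicates the accuracy difference is unlikely under the null hypothesis of equal proportions.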
