A hybrid CNN-ViT framework with cross-attention fusion and data augmentation for robust brain tumor classification


Abstract

Brain tumor classification from MRI scans is a challenging task that requires accurate and timely detection to increase patient survival rates. Conventional machine learning methods with hand-crafted features often fail to handle the varying sizes, shapes, and textures of tumors. In this study, standard transfer learning models (AlexNet, MobileNetV2, InceptionV3, ResNet50, VGG16, VGG19) and conventional classifiers such as Decision Tree, Naïve Bayes, and LDA were evaluated for multiclass brain tumor classification. The Vision Transformer (ViT), which leverages global context modeling, achieved an accuracy of 87.34%. To further improve performance, a hybrid CNN–ViT framework named CAFNet, combining data augmentation with a Cross-Attention Fusion mechanism, was developed, achieving a test accuracy of 96.41% on a multiclass MRI dataset. The results show that CAFNet significantly outperforms conventional machine learning, deep learning, and transfer learning models for robust brain tumor classification.
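The abstract does not specify how the Cross-Attention Fusion mechanism is wired, but a common way to fuse a CNN branch with a ViT branch is scaled dot-product cross-attention in which one branch's features act as queries and the other's tokens act as keys and values. The sketch below illustrates that general pattern in NumPy; the choice of CNN features as queries, the dimensions, and the projection matrices `Wq`, `Wk`, `Wv` are illustrative assumptions, not details taken from CAFNet itself.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fusion(cnn_feats, vit_tokens, Wq, Wk, Wv):
    """Fuse CNN features (as queries) with ViT tokens (as keys/values)
    via scaled dot-product cross-attention. All shapes are assumptions
    for illustration, not the paper's actual configuration."""
    Q = cnn_feats @ Wq              # (n_patches, d)
    K = vit_tokens @ Wk             # (n_tokens, d)
    V = vit_tokens @ Wv             # (n_tokens, d)
    d = Q.shape[-1]
    attn = softmax(Q @ K.T / np.sqrt(d), axis=-1)   # rows sum to 1
    return attn @ V                 # (n_patches, d): ViT context per CNN patch

rng = np.random.default_rng(0)
cnn = rng.standard_normal((49, 64))    # e.g. a 7x7 CNN feature grid, flattened
vit = rng.standard_normal((197, 64))   # e.g. ViT patch tokens plus class token
Wq, Wk, Wv = (rng.standard_normal((64, 64)) * 0.05 for _ in range(3))
fused = cross_attention_fusion(cnn, vit, Wq, Wk, Wv)
print(fused.shape)  # (49, 64)
```

In a full model, the fused features would typically be concatenated with or added to the CNN features before the classification head, letting each spatial location attend to global context captured by the transformer branch.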
