Abstract
Diabetic Retinopathy (DR) remains the leading cause of preventable blindness worldwide, creating an urgent need for accurate and interpretable diagnostic frameworks. This paper proposes a Multi-View Cross-Attention Vision Transformer (MVCAViT) framework that exploits the complementary information in the paired macula-centred and optic-disc-centred fundus images provided by the DRTiD dataset. A novel cross-attention-based model integrates multi-view spatial and contextual features, achieving robust feature fusion for comprehensive DR classification. A hybrid architecture combining a Vision Transformer with a convolutional neural network learns both global and local features, while a multitask learning approach jointly performs disease detection, severity grading, and lesion localisation in a single pipeline. Comprehensive evaluations on the DRTiD dataset show that the proposed framework achieves high classification accuracy and strong lesion localisation performance. Attention-based visualisations further enhance interpretability, indicating the framework's potential for clinical use. This framework advances the state of the art in retinal image analysis for DR diagnosis and may lead to improved patient outcomes and clinical decision-making.