Abstract
Glaucoma remains one of the leading causes of irreversible blindness, characterized by gradual damage to the optic nerve that often goes undetected until advanced stages. Accurate early diagnosis depends heavily on precise segmentation of the optic disc and optic cup in retinal fundus images, which enables reliable calculation of the cup-to-disc ratio, a key biomarker for identifying and monitoring glaucoma. However, many current deep learning approaches generalize poorly owing to variable image quality, blood-vessel occlusion, and structural ambiguity, which reduces both segmentation and classification accuracy. To address these issues, this study introduces DB-SegNet, a diagnostic framework designed to improve both segmentation accuracy and glaucoma detection. The proposed architecture extends SegNet with a Dilated Atrous Context Module (DACM) to capture multi-scale contextual features and a Bidirectional Feature Calibration Unit (BFCU) to refine boundary details. The feature space is optimized with the Bitterling Fish Optimization (BFO) algorithm, while a Multi-Scale Attention Transformer (MSAT) models long-range spatial dependencies. In addition, Honey Badger Optimization (HBO) is applied for hyperparameter fine-tuning, ensuring stable and precise convergence. Evaluation on three widely used benchmark datasets (Drishti-GS1, RIM-ONE, and ORIGA-Light) demonstrates the effectiveness of the framework, yielding Dice coefficients of 99.2% for optic disc and 98.3% for optic cup segmentation, along with classification accuracies of 98.7% (RIM-ONE) and 99.1% (ORIGA-Light). These results highlight the robustness of DB-SegNet against the limitations of existing techniques and underscore its potential as a clinically reliable tool for large-scale glaucoma screening and early intervention planning.
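The abstract's two quantitative notions, the Dice coefficient used to score segmentation and the cup-to-disc ratio derived from the predicted masks, can be made concrete with a minimal sketch. This is not the paper's implementation; it assumes binary NumPy masks for the cup and disc and uses the common vertical-extent definition of the cup-to-disc ratio:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def vertical_cup_to_disc_ratio(cup: np.ndarray, disc: np.ndarray) -> float:
    """Vertical CDR: cup height divided by disc height, a standard
    glaucoma biomarker computed from the segmentation masks."""
    cup_rows = np.flatnonzero(cup.any(axis=1))    # rows containing cup pixels
    disc_rows = np.flatnonzero(disc.any(axis=1))  # rows containing disc pixels
    cup_height = cup_rows[-1] - cup_rows[0] + 1
    disc_height = disc_rows[-1] - disc_rows[0] + 1
    return cup_height / disc_height
```

In clinical screening, a vertical CDR markedly above roughly 0.5 to 0.6 is commonly treated as suspicious for glaucoma, which is why segmentation precision at the cup and disc boundaries translates directly into diagnostic reliability.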