Abstract
Objective: Esophageal cancer (EC) is difficult to identify visually, making early detection crucial to prevent disease progression and deterioration of the patient's health. Methodology: This work acquired spectral information from EC images via Spectrum-Aided Visual Enhancer (SAVE) technology, which extends imaging beyond the limitations of conventional White-Light Imaging (WLI). The hyperspectral data acquired with SAVE were analyzed using advanced deep learning methods, including the YOLOv8, YOLOv7, YOLOv6, YOLOv5, Scaled YOLOv4, and YOLOv3 models. These models were evaluated to build a reliable detection framework for accurately identifying the stage and location of malignant lesions. Results: A comparative analysis of these models showed that the SAVE method consistently outperformed WLI in specificity, sensitivity, and overall diagnostic efficacy. Notably, SAVE improved precision and F1 scores for most of the models, metrics that are essential for enhancing patient care and tailoring effective treatments. Among the evaluated models, YOLOv8 showed exceptional performance, demonstrating increased sensitivity to squamous cell carcinomas (SCCs), while YOLOv5 delivered reliable results across a wide range of conditions, underscoring its adaptability. Conclusions: These findings highlight the clinical value of combining SAVE technology with deep learning models for esophageal cancer screening. The enhanced diagnostic accuracy provided by SAVE, especially when integrated with computer-aided detection (CAD) models, offers potential for improving early detection, precise diagnosis, and tailored treatment approaches in clinically relevant scenarios.