Abstract
We aimed to explore the value and interpretability of a multimodal deep learning model integrating optical coherence tomography angiography (OCTA) and electronystagmography (ENG) for the early screening of Alzheimer's disease (AD) and mild cognitive impairment (MCI). A total of 250 subjects were retrospectively recruited, and OCTA images, ENG signals, and neurocognitive scores were collected from all of them. On the independent validation cohort, the model achieved an area under the receiver operating characteristic (ROC) curve of 0.85, with a sensitivity of 0.73 and a specificity of 0.90 at the optimal ROC cut-off. Gradient-weighted Class Activation Mapping (Grad-CAM) analysis showed that the model focused on regions with reduced microvascular density, and SHapley Additive exPlanations (SHAP) analysis revealed that saccade accuracy (left eye), saccade latency (right eye), and smooth pursuit gain (left eye) contributed most to the model's predictions. The multimodal model enables effective, non-invasive early screening of AD/MCI with good interpretability.
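The reported operating point (sensitivity 0.73, specificity 0.90) corresponds to the threshold that maximizes Youden's J statistic (sensitivity + specificity − 1) along the ROC curve. The snippet below is a minimal sketch of that computation, assuming the validation-cohort labels and predicted probabilities are available as NumPy arrays; the function and variable names are illustrative, not taken from the study, and the synthetic data in the usage example is not the study's cohort.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def optimal_cutoff(y_true, y_score):
    """Return AUC and the sensitivity/specificity at the Youden-optimal cut-off."""
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    j = tpr - fpr                      # Youden's J at each candidate threshold
    best = np.argmax(j)                # index of the optimal operating point
    auc = roc_auc_score(y_true, y_score)
    return auc, tpr[best], 1.0 - fpr[best], thresholds[best]

# Illustrative usage with synthetic data (hypothetical, not the study's data):
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)                      # 0 = control, 1 = AD/MCI
y_score = np.clip(0.4 * y_true + rng.normal(0.3, 0.25, size=200), 0.0, 1.0)
auc, sens, spec, thr = optimal_cutoff(y_true, y_score)
print(f"AUC={auc:.2f}, sensitivity={sens:.2f}, specificity={spec:.2f} at cut-off {thr:.2f}")
```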