Construction and Interpretability of a Multimodal Deep Learning Model of Electronystagmography-Optical Coherence Tomography Angiography for Early Screening of Alzheimer's Disease


Abstract

We aimed to explore the value and interpretability of a multimodal deep learning model integrating optical coherence tomography angiography (OCTA) and electronystagmography (ENG) for the early screening of Alzheimer's disease (AD) and mild cognitive impairment (MCI). A total of 250 subjects were retrospectively recruited, and OCTA images, ENG signals, and neurocognitive scores were collected from all of them. On the independent validation cohort, the model achieved an area under the receiver operating characteristic (ROC) curve of 0.85, with a sensitivity of 0.73 and a specificity of 0.90 at the optimal cut-off. Gradient-weighted Class Activation Mapping (Grad-CAM) analysis showed that the model focused on regions of reduced microvascular density. SHapley Additive exPlanations (SHAP) analysis revealed that saccade accuracy (left eye), saccade latency (right eye), and smooth pursuit gain (left eye) contributed most to the model's predictions. The multimodal model enables effective, non-invasive early screening for AD/MCI with good interpretability.
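The abstract does not state how the optimal cut-off was chosen; a common convention for pairing a single sensitivity/specificity with an ROC curve is Youden's J statistic (J = sensitivity + specificity − 1). The sketch below illustrates that convention on hypothetical toy scores and labels, not the study's data or code:

```python
# Hypothetical sketch: selecting an ROC optimal cut-off by Youden's J.
# The scores and labels below are invented for illustration only.

def roc_points(scores, labels):
    """Return (threshold, sensitivity, specificity) for each candidate cut-off."""
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    for t in sorted(set(scores)):
        # Predict positive when score >= threshold t.
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        points.append((t, tp / pos, tn / neg))
    return points

def youden_cutoff(scores, labels):
    """Return the (threshold, sensitivity, specificity) maximizing J = sens + spec - 1."""
    return max(roc_points(scores, labels), key=lambda p: p[1] + p[2] - 1)

# Toy model scores (predicted probability of AD/MCI) and ground-truth labels.
scores = [0.1, 0.2, 0.35, 0.4, 0.6, 0.7, 0.8, 0.9]
labels = [0,   0,   1,    0,   0,   1,   1,   1]
t, sens, spec = youden_cutoff(scores, labels)
```

On this toy data the cut-off maximizing J is 0.7, giving a sensitivity of 0.75 and a specificity of 1.0; reporting the pair at such a cut-off mirrors how the study's 0.73/0.90 figures are presented.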
