Reducing inference cost of Alzheimer's disease identification using an uncertainty-aware ensemble of uni-modal and multi-modal learners


Abstract

While multi-modal deep learning approaches trained using magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (FDG PET) data have shown promise in the accurate identification of Alzheimer's disease, their clinical applicability is hindered by the assumption that both modalities are always available during model inference. In practice, clinicians adjust diagnostic tests based on available information and specific clinical contexts. We propose a novel MRI- and FDG PET-based multi-modal deep learning approach that mimics clinical decision-making by incorporating uncertainty estimates of an MRI-based model (generated using Monte Carlo dropout and evidential deep learning) to determine the necessity of an FDG PET scan, and only inputting the FDG PET to a multi-modal model when required. This approach significantly reduces the reliance on FDG PET scans, which are costly and expose patients to radiation. Our approach reduces the need for FDG PET by up to 92% without compromising model performance, thus optimizing resource use and patient safety. Furthermore, using a global model explanation technique, we provide insights into how anatomical changes in brain regions, such as the entorhinal cortex, amygdala, and ventricles, can positively or negatively influence the need for FDG PET scans in alignment with clinical understanding of Alzheimer's disease.
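The uncertainty-gated inference described in the abstract can be sketched as follows. This is a toy illustration under stated assumptions, not the authors' implementation: the linear "MRI-only" classifier, its weights, the entropy-based uncertainty score, and the threshold `tau` are all hypothetical stand-ins. The key idea it demonstrates is Monte Carlo dropout, keeping dropout active at inference and averaging over stochastic forward passes, with the FDG PET scan requested only when the resulting predictive entropy exceeds a threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, w, n_samples=50, p_drop=0.5):
    """Monte Carlo dropout for a toy linear 'MRI-only' classifier.

    Dropout stays active at inference; averaging over stochastic forward
    passes gives a predictive distribution whose entropy serves as an
    uncertainty estimate (Gal & Ghahramani-style approximation)."""
    probs = []
    for _ in range(n_samples):
        mask = rng.random(w.shape) > p_drop        # random dropout mask
        logits = x @ (w * mask) / (1.0 - p_drop)   # inverted-dropout scaling
        p = 1.0 / (1.0 + np.exp(-logits))          # sigmoid -> P(AD | MRI)
        probs.append(p)
    mean_p = float(np.mean(probs))
    # binary predictive entropy as the uncertainty score
    eps = 1e-12
    entropy = -(mean_p * np.log(mean_p + eps)
                + (1.0 - mean_p) * np.log(1.0 - mean_p + eps))
    return mean_p, entropy

def diagnose(x_mri, w_mri, tau=0.5):
    """Uncertainty-gated inference: request an FDG PET scan only when the
    MRI-only model is too uncertain (entropy above threshold tau)."""
    p, uncertainty = mc_dropout_predict(x_mri, w_mri)
    if uncertainty <= tau:
        return p, False   # MRI alone suffices; no PET needed
    # a multi-modal MRI+PET model would be invoked here instead
    return p, True        # defer to the multi-modal model
```

In practice the same gate could sit in front of any pair of uni-modal and multi-modal networks; the 92% reduction reported in the abstract corresponds to how often the gate returns `False` at the chosen threshold on their data.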
