A systematic review of explainable artificial intelligence methods for speech-based cognitive decline detection


Abstract

Artificial intelligence models analyzing speech show remarkable promise for identifying cognitive decline, achieving performance comparable to clinical assessments. However, their "black box" nature poses significant barriers to clinical adoption, as healthcare professionals require transparent decision-making processes. This challenge is compounded by regulatory requirements, including GDPR mandates for explainability and medical device regulations emphasizing AI transparency. Following PRISMA guidelines, we systematically reviewed explainable AI (XAI) techniques for speech-based detection of Alzheimer's disease and mild cognitive impairment across six databases through May 2025. From 2077 records, 13 studies met the inclusion criteria, employing XAI methods including SHAP, LIME, attention mechanisms, and novel approaches across machine learning architectures. Models achieved AUC values of 0.76-0.94, consistently identifying acoustic markers (pause patterns, speech rate) and linguistic features (vocabulary diversity, pronoun usage). While XAI techniques demonstrate promise for clinical interpretability, significant gaps remain in stakeholder engagement, real-world validation, and standardized evaluation frameworks.
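To make the kind of pipeline the abstract describes concrete, the sketch below applies SHAP to a classifier trained on hand-crafted speech features. This is purely illustrative and not taken from any reviewed study: the feature names (chosen to mirror the markers the abstract mentions, such as pause patterns, speech rate, vocabulary diversity, and pronoun usage), the synthetic data, and the choice of a gradient-boosted classifier are all assumptions.

```python
# Illustrative sketch (not from the review): explaining a speech-feature
# classifier with SHAP. All data here is synthetic and the feature names
# are hypothetical stand-ins for the markers named in the abstract.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical acoustic/linguistic features of the sort the review reports.
feature_names = ["pause_rate", "speech_rate", "type_token_ratio", "pronoun_ratio"]
X = rng.normal(size=(200, len(feature_names)))
# Synthetic labels loosely tied to pause rate and vocabulary diversity.
y = (X[:, 0] - X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# TreeExplainer returns per-feature SHAP values for each test utterance:
# the local attributions that make an individual prediction inspectable.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Averaging absolute attributions gives a global feature-importance summary,
# the kind of ranking that surfaces markers like pause patterns.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

In practice the reviewed studies extract such features from recorded speech and transcripts rather than generating them synthetically, but the explanation step, attributing a prediction to named, clinically meaningful features, follows this same pattern.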
