Machine learning model explanations as interpretations of evidence: a theoretical framework of explainability and its implications for high-stakes biomedical decision-making


Abstract

Explainable Artificial Intelligence, or XAI, is a vibrant research topic in the artificial intelligence community. It is attracting growing interest across methods and domains, especially those involving high-stakes decision-making, such as the biomedical sector. Much has been written on the subject, yet XAI still lacks a shared terminology and a framework capable of providing structural soundness to explanations, a crucial need for decisions that impact healthcare. In our work, we address these issues by proposing a novel definition of explanation that synthesizes insights from the existing literature. We recognize that explanations are not atomic but rather the combination of evidence stemming from the model and its input-output mapping, together with the human interpretation of this evidence. Furthermore, we characterize explanations along two properties: faithfulness (i.e., how accurately the explanation describes the model's inner workings and decision-making process) and plausibility (i.e., how convincing the explanation appears to the user). Our theoretical framework simplifies the operationalization of these properties and yields new insights into common explanation methods, which we analyze through case studies. We explore the impact of our framework in the sensitive domain of biomedicine, where XAI can play a central role in building trust by balancing faithfulness and plausibility.
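To give a concrete sense of how faithfulness can be operationalized, the sketch below implements a common deletion-based check for a feature-attribution explanation. This example is not taken from the paper: the classifier, the attribution vector, and the `deletion_faithfulness` helper are hypothetical stand-ins, a minimal sketch assuming a scikit-learn model and a per-feature attribution (e.g., from SHAP or permutation importance).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy stand-ins: any fitted classifier and per-feature attribution would do.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

def deletion_faithfulness(model, x, attribution, baseline=0.0):
    """Deletion test: zero out features from most to least attributed
    importance and record how the predicted probability degrades.
    A faithful attribution should produce a rapid drop."""
    order = np.argsort(-np.abs(attribution))   # most important first
    x_masked = x.astype(float).copy()
    probs = []
    for i in order:
        x_masked[i] = baseline                 # "delete" the feature
        probs.append(model.predict_proba(x_masked.reshape(1, -1))[0, 1])
    return np.array(probs)

# Use the linear model's own contribution terms (coef * feature value)
# as a trivially faithful attribution for the first sample.
attribution = model.coef_[0] * X[0]
curve = deletion_faithfulness(model, X[0], attribution)
print(curve)  # a roughly monotone decay suggests a faithful explanation
```

Plausibility, by contrast, is a human-centered property: it concerns how convincing the explanation looks to the user and is typically assessed through user studies rather than automated metrics.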
