Explainability increases trust resilience in intelligent agents


Abstract

Even though artificial intelligence (AI)-based systems typically outperform human decision-makers, they are not immune to errors; when users observe these errors, they lose trust in the systems and become less likely to use them again, a phenomenon known as algorithm aversion. The present research investigated whether explainable AI (XAI) can serve as a viable strategy to counter algorithm aversion. We conducted two experiments examining how XAI influences users' willingness to continue using AI-based systems after those systems err. The results showed that, after observing algorithmic errors, users' inclination to delegate decisions to or follow advice from intelligent agents decreased significantly relative to the period before the errors were revealed. However, explainability effectively mitigated this decline: users in the XAI condition were more likely than those in the non-XAI condition to continue using intelligent agents for subsequent tasks after seeing the algorithms err. We further found that explainability reduced users' decision regret, and that this decrease in decision regret mediated the relationship between explainability and re-use behaviour. These findings underscore the adaptive function of XAI in alleviating negative user experiences and maintaining user trust in the context of imperfect AI.
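The mediation claim in the abstract (explainability → lower decision regret → higher re-use) can be illustrated with a standard regression-based mediation sketch. The data below are simulated for illustration only; the variable names, effect sizes, and analysis steps are assumptions, not the authors' actual data or method.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical simulated data: x = XAI condition (0 = no explanation, 1 = XAI),
# m = decision regret (mediator), y = willingness to re-use the agent.
x = rng.integers(0, 2, n).astype(float)
m = -0.8 * x + rng.normal(0, 1, n)             # assumed: XAI lowers regret
y = -0.6 * m + 0.1 * x + rng.normal(0, 1, n)   # assumed: lower regret raises re-use

def ols(predictors, outcome):
    """Least-squares coefficients, with an intercept prepended."""
    design = np.column_stack([np.ones(len(outcome)), *predictors])
    return np.linalg.lstsq(design, outcome, rcond=None)[0]

a = ols([x], m)[1]           # path a: XAI -> regret
b = ols([x, m], y)[2]        # path b: regret -> re-use, controlling for XAI
c_prime = ols([x, m], y)[1]  # direct effect of XAI on re-use
indirect = a * b             # mediated (indirect) effect of XAI via regret

print(f"a = {a:.2f}, b = {b:.2f}, indirect = {indirect:.2f}, direct = {c_prime:.2f}")
```

Because both paths are negative (XAI reduces regret, and regret reduces re-use), their product, the indirect effect, is positive, matching the pattern the abstract reports. In practice the indirect effect would be tested with a bootstrap confidence interval rather than read off point estimates.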
