Beyond Risk Reduction: Vigilant Trust in Artificial Intelligence Based on Evidence from China


Abstract

Public trust in artificial intelligence (AI) is often assumed to promote acceptance by reducing perceived risks. Using a nationally representative survey of 10,294 Chinese adults, this study challenges that assumption and introduces the concept of vigilant trust. We argue that trust in AI does not necessarily diminish risk awareness but can coexist with, and even intensify, attention to potential harms. Examining four dimensions of trust (trusting stance, competence, benevolence, and integrity), we find that all four consistently enhance perceived benefits, which emerge as the strongest predictor of AI acceptance. However, trust shows differentiated relationships with perceived risks: benevolence reduces risk perception, whereas trusting stance is associated with higher perceptions of both benefits and risks. Perceived risks do not uniformly deter acceptance and, in some contexts, are positively associated with willingness to adopt AI. By moving beyond the conventional view of trust as a risk-reduction mechanism, this study conceptualizes vigilant trust as a mode of engagement in which openness to AI is accompanied by sustained awareness of uncertainty. The findings offer a more nuanced understanding of public acceptance of AI and its implications for governance and communication.
