Abstract
Public trust in artificial intelligence (AI) is often assumed to promote acceptance by reducing perceived risks. Using a nationally representative survey of 10,294 Chinese adults, this study challenges that assumption and introduces the concept of vigilant trust. We argue that trust in AI does not necessarily diminish risk awareness but can coexist with, and even intensify, attention to potential harms. Examining four dimensions of trust (trusting stance, competence, benevolence, and integrity), we find that all four consistently enhance perceived benefits, which emerge as the strongest predictor of AI acceptance. Trust, however, shows differentiated relationships with perceived risks: benevolence reduces risk perception, whereas trusting stance is associated with higher perceptions of both benefits and risks. Moreover, perceived risks do not uniformly deter acceptance and, in some contexts, are positively associated with willingness to adopt AI. By moving beyond the conventional view of trust as a risk-reduction mechanism, this study conceptualizes vigilant trust as a mode of engagement in which openness to AI is accompanied by sustained awareness of uncertainty. The findings offer a more nuanced understanding of public acceptance of AI and its implications for governance and communication.