Abstract
With the widespread adoption of AI-generated videos in media content production, their visual credibility and the user trust they command have attracted increasing attention. Although AI-generated videos have become progressively more realistic, user-perceived visual anomalies (AI hallucinations) may elicit negative psychological responses, undermine trust, and ultimately shape behavioral intentions. Grounded in the Stimulus-Organism-Response (S-O-R) framework, this study systematically examines how AI hallucinations influence perceived trust through uncanny valley eeriness and perceived realism, and how these effects in turn translate into behavioral intention. We manipulated three levels of AI hallucination (low, medium, and high) in AI-generated video stimuli and recruited 408 participants to view and evaluate the videos. Hypotheses were tested using partial least squares structural equation modeling (PLS-SEM) and analysis of variance (ANOVA). The results indicate that AI hallucinations significantly increase uncanny valley eeriness and reduce perceived realism; both factors in turn affect behavioral intention via perceived trust, with perceived realism showing the strongest predictive effect on trust. Moreover, uncanny valley eeriness, perceived realism, perceived trust, and behavioral intention all differed significantly across hallucination levels. These findings support the applicability of the S-O-R model to the viewing of AI-generated videos and delineate a psychological transmission mechanism through which AI hallucinations shape trust judgments and behavioral responses via affective and cognitive processing, offering a new theoretical perspective on credibility construction in generative visual content.