Abstract
BACKGROUND: While the technological advances of generative artificial intelligence are widely recognized, how they reshape the psychological mechanisms of human persuasion and information processing remains underexplored. This study addresses that gap by examining how AI-generated rumors persuade internet users, drawing on the Elaboration Likelihood Model (ELM). METHODS: A systematic content analysis was conducted on a large dataset of 11,942 online comments responding to a range of AI-generated rumors. Using an established coding scheme and a reliability-testing procedure, each comment was classified as indicative of either central- or peripheral-route processing. RESULTS: The analysis reveals that 90.5% of comments demonstrated peripheral-route processing, with emotional expression as the primary indicator. Only 9.5% reflected central-route processing, most often users providing reasons or evidence, or questioning the source. DISCUSSION: We argue that the "technological realism" of AI-generated content plays a key role in this pattern: it diminishes users' ability and motivation to engage in deeper cognitive elaboration, leading them to rely predominantly on the peripheral route to persuasion. These findings extend the ELM to the age of AI and offer practical insights for online platform management, cybersecurity enhancement, and public education in digital literacy.