Abstract
Misinformation remains a major challenge in today's information environment, and rapid advances in AI-driven content generation risk amplifying the problem. Generative AI is a double-edged sword: it shows growing utility for detecting misinformation, facilitating democratic deliberation, countering conspiracy narratives, and promoting reliable information, yet the same technologies enable the rapid, large-scale production of persuasive false content. Understanding how people perceive AI-generated misinformation is therefore crucial for designing effective interventions and safeguarding information integrity. To address this, we embedded a preregistered experiment in a large-scale web survey conducted across 27 European countries. Participants were shown eight short news headlines related to the Russo-Ukrainian war: four AI-generated and four human-generated, evenly split between real and fake news. For each headline, respondents rated its perceived veracity and their willingness to share it. Our findings show that fake news is consistently judged less accurate and is less likely to be shared, with systematic differences across countries and across individual characteristics such as cognitive reflection, ideology, and trust. While differences between human- and AI-generated content were minimal, the results reveal broad and robust patterns in how people evaluate misinformation across diverse European contexts. These insights highlight the need to strengthen individuals' cognitive and informational resilience to counter the spread of misleading content in increasingly complex media environments.