Beliefs and sharing intentions of human- and AI-generated fake news: Evidence from 27 European countries


Abstract

Misinformation remains a major challenge in today's information environment, and rapid advances in AI-driven content generation risk amplifying this problem. Generative AI represents a double-edged sword: beyond its growing utility for detecting misinformation, it can also facilitate democratic deliberation, counter conspiracy narratives, and promote reliable information, even as the same technologies enable the rapid, large-scale production of persuasive false content. Understanding how people perceive AI-generated misinformation is therefore crucial for designing effective interventions and safeguarding information integrity. To address this, we embedded a preregistered experiment in a large-scale web survey conducted across 27 European countries. Participants were presented with eight short news headlines related to the Russo-Ukrainian war: four AI-generated and four human-generated, evenly split between real and fake news. For each headline, respondents assessed its perceived veracity and their willingness to share it. Our findings show that fake news is consistently viewed as less accurate and less likely to be shared, with systematic differences across countries and individual characteristics such as cognitive reflection, ideology, and trust. While differences between human- and AI-generated content were minimal, the results reveal broader and robust patterns in how people evaluate misinformation across diverse European contexts. These insights highlight the need to strengthen individuals' cognitive and informational resilience to counter the spread of misleading content in increasingly complex media environments.
