When the source is a bot: How people adapt their evaluation strategies to assess AI-generated content


Abstract

Generative artificial intelligence (GenAI) blurs the boundaries between expert and non-expert sources as it increasingly creates and distributes scientific content. This study examines how individuals adapt their evaluation strategies, including content evaluation, source evaluation, and corroboration, when using GenAI versus a search engine. Drawing on performance tasks in which 30 adult participants from diverse educational backgrounds evaluated socio-scientific dilemmas, together with follow-up interviews, the findings reveal that users employed these strategies on both platforms but adapted them in distinct ways. We identified two evaluation strategies that emerged as analytical constructs from the qualitative data. First, to corroborate output, participants frequently used a strategy we term 'representation evaluation': assessing whether GenAI accurately summarized its sources rather than independently verifying that the sources agree. Second, participants applied 'meta source evaluation,' relying on their familiarity with the sources GenAI provided instead of evaluating those sources directly. Although all participants engaged in dialogue with the chatbot, they did not leverage its dialogue capabilities to assess credibility, and many relied on a "machine heuristic," assuming GenAI's inherent correctness and reflecting a well-documented over-trust in automated systems. This research underscores the importance of developing and assessing critical evaluation skills for navigating AI-generated scientific information. Specifically, it extends existing models of online information evaluation to contexts mediated by artificial intelligence.

Special Statement

1. Part of the content on this page consists of fair quotations of publicly available information; quoted content is provided as supplementary information only and does not represent the position of this site.

2. If you believe that quoted content on this page infringes your rights, please contact this site promptly and we will address it as soon as possible.

3. Other media or individuals wishing to use original content from this page must credit the source as "[生知库]" and obtain authorization; those using quoted content must contact the original authors for permission themselves.

4. For submissions and cooperation, contact: info@biocloudy.com.