Assessing the accuracy and explainability of using ChatGPT to evaluate the quality of health news


Abstract

BACKGROUND: With the growing prevalence of health misinformation online, there is an urgent need for tools that can reliably assist the public in evaluating the quality of health information. This study investigates the performance of GPT-3.5-Turbo, a representative and widely used large language model (LLM), in rating the quality of health news and providing explanatory justification for its rating assessments.

METHODS: We evaluated GPT-3.5-Turbo’s performance on 3222 health news articles from an expert-annotated dataset compiled by HealthNewsReview.org, which assesses the quality of health news across nine criteria. GPT-3.5-Turbo was prompted with standardized queries tailored to each criterion. We measured its rating performance using 95% confidence intervals for precision, recall, and F1 scores in binary classification (satisfactory/not satisfactory). Additionally, the linguistic complexity, readability, and quality of GPT-3.5-Turbo’s explanations were assessed through both quantitative linguistic analysis and qualitative evaluation of consistency and contextual relevance.

RESULTS: GPT-3.5-Turbo’s rating performance varied across criteria, with the highest accuracy for the Cost criterion (F1 = 0.824) but lower accuracy for the Benefit, Conflict, and Quality criteria (F1 < 0.5), underperforming traditional supervised machine learning models. However, its explanations were clear, with readability suited to late high school or early college levels, and scored highly for consistency (average score: 2.90/3) and contextual relevance (average score: 2.73/3). These findings highlight GPT-3.5-Turbo’s strength in providing understandable and contextually relevant explanations, despite its limited rating accuracy.

CONCLUSION: While GPT-3.5-Turbo’s rating accuracy requires improvement, its strength in offering comprehensible and contextually relevant explanations presents a valuable opportunity to enhance public understanding of health news quality. Leveraging LLMs as complementary tools for health literacy initiatives could help mitigate misinformation by helping non-expert audiences interpret and assess health information.

SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12889-025-23206-0.
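The evaluation described in the abstract treats each criterion as a binary classification task and reports precision, recall, and F1 with 95% confidence intervals. A minimal sketch of how such metrics and bootstrap percentile intervals can be computed (my own illustration, not the paper's code; function names and the bootstrap settings are assumptions):

```python
import random

def prf1(y_true, y_pred):
    """Precision, recall, and F1 for binary labels (1 = satisfactory)."""
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def bootstrap_f1_ci(y_true, y_pred, n_boot=2000, alpha=0.05, seed=0):
    """Percentile 95% CI for F1 via bootstrap resampling of articles."""
    rng = random.Random(seed)
    n = len(y_true)
    f1s = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # resample with replacement
        f1s.append(prf1([y_true[i] for i in idx], [y_pred[i] for i in idx])[2])
    f1s.sort()
    lo = f1s[int((alpha / 2) * n_boot)]
    hi = f1s[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

In practice such per-criterion scores would be computed once for each of the nine rating criteria, with the model's satisfactory/not-satisfactory outputs compared against the expert annotations.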
