Accuracy, quality, and readability analyses of responses from large language models to questions on pediatric dental sedation



Abstract

BACKGROUND: Large language models (LLMs) have become increasingly integrated into healthcare communication, including dentistry. However, the extent to which they can provide accurate, guideline-based information on critical topics such as pediatric dental sedation remains unclear. The aim of this study was to evaluate and compare the accuracy, content quality, and readability of responses provided by five widely used LLM-based chatbots (ChatGPT-4o, ChatGPT-3.5, Google Gemini, Microsoft Copilot, and Anthropic Claude) to clinical questions related to pediatric dental sedation. METHODS: A total of 32 clinically relevant questions covering preoperative, intraoperative, and postoperative aspects of pediatric dental sedation were presented to each chatbot. Responses were assessed independently by two blinded experts, using an evidence-based grading system for accuracy and the DISCERN tool for content quality; readability was calculated with the Flesch–Kincaid Grade Level formula. Data were analysed using Kruskal–Wallis tests to evaluate overall differences among the chatbot groups, followed by pairwise comparisons adjusted for multiple testing. Inter-reviewer reliability for DISCERN scores was assessed using intraclass correlation coefficients (ICCs), and descriptive statistics were calculated for each metric. The statistical significance level was set at 0.05. RESULTS: Gemini and ChatGPT-4o achieved the highest accuracy, providing most of their responses in full compliance with the guidelines. ChatGPT-3.5 and Claude performed moderately, while Copilot showed the lowest accuracy and the highest rate of guideline deviation. For content quality, ChatGPT-4o recorded the highest mean DISCERN score (57.77), closely followed by Gemini (57.56), although the difference between them was not significant (p > 0.05). Readability analysis revealed that ChatGPT-3.5 produced the most accessible content, while Claude's responses were the most complex.
Inter-rater reliability for DISCERN scoring was excellent (ICC > 0.85) for all chatbots, supporting the robustness of the evaluations. CONCLUSIONS: Although ChatGPT-4o and Gemini performed best overall, none of the evaluated chatbots fully aligned with clinical guidelines or consistently achieved high accuracy across all phases. These findings underscore the need for expert oversight when AI chatbots are used as a source of information on pediatric dental sedation. Future research should focus on multilingual testing, iterative dialogue-based evaluations, and domain-specific fine-tuning to enhance clinical applicability and patient safety. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12903-026-08026-x.
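The Flesch–Kincaid Grade Level metric used in the readability analysis is defined as 0.39 × (words per sentence) + 11.8 × (syllables per word) − 15.59. A minimal sketch of that formula is shown below; the vowel-group syllable counter is a naive heuristic introduced here for illustration (the study's actual tooling is not specified, and published readability tools typically use dictionary-based syllable counts):

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable estimate: count contiguous vowel groups (heuristic)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid Grade Level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

grade = flesch_kincaid_grade("The child must fast before sedation.")
```

Higher scores correspond to text requiring more years of schooling, which is how Claude's responses were judged the most complex and ChatGPT-3.5's the most accessible.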
