Knowledge-level comparison in pulpal and periapical diseases: dental students versus artificial intelligence models (Gemini, Microsoft Copilot, ChatGPT-3.5, ChatGPT-4o): cross-sectional study


Abstract

BACKGROUND: This study explored the diagnostic accuracy of artificial intelligence (AI) chatbots and dental students when responding to questions related to pulpal and periapical diseases. Rapid advancements in AI have led to increased interest in its applicability to clinical education and decision-making in dentistry.

OBJECTIVE: To compare the accuracy rates of responses given by dental students and various AI-based chatbots (ChatGPT-3.5, ChatGPT-4o, Gemini, and Microsoft Copilot) to multiple-choice questions designed to assess knowledge of pulpal and periapical diseases.

METHODS: The study included third- and fifth-year dental students, representing different levels of clinical training, along with four distinct AI-based chatbots. A total of 327 responses were collected from the students, while each chatbot generated 450 responses. The evaluation was based on 15 multiple-choice questions developed in accordance with the 2020 version of the American Association of Endodontists (AAE) clinical guidelines. Accuracy rates were compared using descriptive statistics, one-way ANOVA with Bonferroni post hoc tests for significant differences, and Chi-square tests for correct-versus-incorrect response ratios.

RESULTS: The highest accuracy rate was observed among fifth-year dental students (85.1%), followed by ChatGPT-4o (79.6%), ChatGPT-3.5 (75.1%), Gemini (71.6%), third-year students (64.9%), and Microsoft Copilot (61.3%). A statistically significant difference was found among the groups (p < 0.05). ChatGPT-4o achieved an accuracy rate comparable to that of the more clinically experienced fifth-year students (p > 0.05), whereas the other chatbots and the third-year students performed worse.

CONCLUSION: The chatbots exhibited varying levels of accuracy in diagnosing pulpal and periapical diseases. ChatGPT-4o performed at a level similar to that of more clinically experienced students, suggesting its potential as a supportive tool in dental education and clinical decision support systems. However, the relatively lower accuracy rates of models such as Gemini and Microsoft Copilot underscore the continued importance of human expertise. These findings suggest that while AI systems may serve as complementary tools in education, they cannot fully replace clinical judgment grounded in human experience.
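The between-group comparison of correct versus incorrect response ratios described in the methods can be sketched as a Pearson chi-square test of independence on a contingency table. The counts below are illustrative reconstructions from the reported chatbot accuracy rates (450 responses per chatbot); they are not the study's raw data, and the function is a minimal standard-library implementation rather than the authors' actual analysis code.

```python
# Pearson chi-square test of independence on a correct/incorrect
# contingency table, using only the Python standard library.

def chi_square(table):
    """Return the chi-square statistic and degrees of freedom
    for a list of (correct, incorrect) count rows."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / grand
            chi2 += (obs - exp) ** 2 / exp
    dof = (len(table) - 1) * (len(table[0]) - 1)
    return chi2, dof

# Hypothetical (correct, incorrect) counts chosen to match the
# reported chatbot accuracy rates over 450 responses each;
# NOT the study's raw data.
counts = [
    (358, 92),   # ChatGPT-4o, ~79.6%
    (338, 112),  # ChatGPT-3.5, ~75.1%
    (322, 128),  # Gemini, ~71.6%
    (276, 174),  # Microsoft Copilot, ~61.3%
]

chi2, dof = chi_square(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}")
# With dof = 3, the 5% critical value is 7.815; a statistic above
# that threshold indicates a significant difference among groups.
```

On these illustrative counts the statistic comfortably exceeds the 5% critical value, consistent with the significant between-group difference (p < 0.05) reported in the results.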
