The reliability of answers from four different AI chatbots on periodontology theoretical exam questions: an evaluation in dental education


Abstract

BACKGROUND: Dentistry is a profession shaped by modern technology, materials, and societal events such as the pandemic, making it an academic field that continuously evolves in both practice and education. Recent developments include the widespread implementation of digital dentistry and the incorporation of remote instruction into dental training during the pandemic; investigating how best to integrate artificial intelligence (AI) into the dental profession and its education is therefore necessary. This study evaluated the reliability of answers provided by four major AI chatbots using 125 periodontology exam questions administered between 2018 and 2023.

METHODS: This study used closed-ended questions retrieved from the official archives of the Department of Periodontology, Faculty of Dentistry, Istanbul Aydin University, originally included in exams given to 3rd-, 4th-, and 5th-year students between 2018 and 2023. These questions were then posed to the AI chatbots for evaluation. Of the 125 questions, 92 were true/false, 8 were fill-in-the-blank, 22 were multiple-choice, and 3 were calculation questions. Each question was posed to each AI chatbot (ChatGPT-4o mini, ChatGPT-4o, Gemini Advance, and CoPilot Pro) twice, with a one-month interval, and answers were evaluated on a binary scoring system. Before the questions were asked, the chat histories and cookies were cleared from the user interfaces, and a previously unused e-mail address was used to log in. The questions were asked one at a time, and the next question was not asked until the previous one had been answered. The NCSS (Number Cruncher Statistical System) 2007 (Kaysville, Utah, USA) program was used for statistical analyses. Descriptive statistical methods were used to evaluate the study data. Cochran's Q test was used to compare qualitative data across three or more periods, and the McNemar test was used for post hoc analyses. Statistical significance was set at the p < 0.01 and p < 0.05 levels.

RESULTS: CoPilot Pro achieved the highest accuracy rate both on Day 0 (73.6%) and after one month (75.2%). When the performance of the AI chatbots on Day 0 and at Month 1 was compared, no statistically significant difference was found. However, GPT-4o mini performed significantly worse than the other three AI chatbots at both time points (p < 0.05). GPT-4o showed the most inconsistent performance: 19 questions answered correctly in the first round were answered incorrectly in the second round.

CONCLUSION: The findings underscore the need for critical evaluation of AI tools before their adoption in dental education. While AI chatbots can support dental education, their use should be carefully guided and complemented by clinical experience, critical appraisal of information sources, and academic oversight to ensure professional competence and responsible integration into learning processes.

SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12903-025-07387-z.
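The study's analyses were run in NCSS 2007, but as an illustration of the two tests named in the METHODS, a minimal Python sketch using statsmodels might look like the following. The `scores` matrix here is synthetic placeholder data standing in for the graded correct/incorrect answers (1 = correct, 0 = incorrect); it is not the study's data.

```python
# A minimal sketch (assuming statsmodels; not the authors' NCSS workflow):
# Cochran's Q compares accuracy across the four chatbots on the same
# question set, and McNemar's test serves as a pairwise post hoc check.
import numpy as np
from statsmodels.stats.contingency_tables import cochrans_q, mcnemar

rng = np.random.default_rng(0)
# Hypothetical binary scores: rows = 125 questions, columns = 4 chatbots.
scores = rng.integers(0, 2, size=(125, 4))

# Cochran's Q: do the four chatbots differ in accuracy on the same questions?
q_result = cochrans_q(scores)
print(f"Cochran's Q = {q_result.statistic:.2f}, p = {q_result.pvalue:.4f}")

# Post hoc McNemar test for one pair of chatbots (columns 0 and 1):
# build the 2x2 table of agreement/disagreement across questions.
a, b = scores[:, 0], scores[:, 1]
table = [[np.sum((a == 1) & (b == 1)), np.sum((a == 1) & (b == 0))],
         [np.sum((a == 0) & (b == 1)), np.sum((a == 0) & (b == 0))]]
mc_result = mcnemar(table, exact=True)
print(f"McNemar p = {mc_result.pvalue:.4f}")
```

The same McNemar construction applies to the Day 0 versus Month 1 comparison for a single chatbot, with the two columns holding that chatbot's scores at the two time points.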
