Abstract
BACKGROUND: This study examined the diagnostic accuracy of artificial intelligence (AI) chatbots and dental students in responding to questions on pulpal and periapical diseases. Rapid advances in AI have generated growing interest in its applicability to clinical education and decision-making in dentistry.
OBJECTIVE: To compare the accuracy of responses given by dental students and four AI-based chatbots (ChatGPT-3.5, ChatGPT-4o, Gemini, and Microsoft Copilot) to multiple-choice questions assessing knowledge of pulpal and periapical diseases.
METHODS: The study included third- and fifth-year dental students, representing different levels of clinical training, and four AI-based chatbots. A total of 327 responses were collected from the students, and each chatbot generated 450 responses. The evaluation was based on 15 multiple-choice questions developed in accordance with the 2020 American Association of Endodontists (AAE) clinical guidelines. Accuracy rates were compared using descriptive statistics, one-way ANOVA with Bonferroni post hoc tests, and chi-square tests of correct versus incorrect response ratios.
RESULTS: Fifth-year dental students achieved the highest accuracy rate (85.1%), followed by ChatGPT-4o (79.6%), ChatGPT-3.5 (75.1%), Gemini (71.6%), third-year students (64.9%), and Microsoft Copilot (61.3%). A statistically significant difference was found among the groups (p < 0.05). The accuracy of ChatGPT-4o was comparable to that of the more clinically experienced fifth-year students (p > 0.05), whereas the other chatbots and the third-year students performed worse.
CONCLUSION: The chatbots showed varying accuracy in diagnosing pulpal and periapical diseases. ChatGPT-4o performed at a level similar to that of more clinically experienced students, suggesting its potential as a supportive tool in dental education and clinical decision support systems. However, the lower accuracy of models such as Gemini and Microsoft Copilot underscores the continued importance of human expertise. These findings suggest that although AI systems may serve as complementary tools in education, they cannot fully replace clinical judgment grounded in human experience.
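To illustrate the chi-square comparison named in METHODS, the sketch below (not the authors' analysis code) tests correct versus incorrect response counts across the four chatbots using SciPy. The counts are reconstructed from the reported accuracy rates and the 450-response total per chatbot, so they are approximate and for illustration only.

# Minimal sketch, assuming SciPy is available; counts are rounded
# reconstructions from the abstract's percentages, not the study data.
from scipy.stats import chi2_contingency

# group: (correct, incorrect) out of 450 responses each
counts = {
    "ChatGPT-4o":        (358, 92),    # ~79.6% correct
    "ChatGPT-3.5":       (338, 112),   # ~75.1% correct
    "Gemini":            (322, 128),   # ~71.6% correct
    "Microsoft Copilot": (276, 174),   # ~61.3% correct
}

# Build the 4x2 contingency table and run the chi-square test of
# independence between group membership and response correctness.
table = [list(pair) for pair in counts.values()]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")

A significant p-value here would indicate that correct-response proportions differ across groups, consistent with the between-group difference (p < 0.05) reported in RESULTS; pairwise comparisons such as fifth-year students versus ChatGPT-4o would require the per-group student counts, which the abstract does not break down.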