Abstract
BACKGROUND/PURPOSE: This study compared the performance of seven leading large language models (Gemini 2.5 Pro, Grok-4, GPT-5, Claude-4, Copilot, Perplexity, and GPT-4o) on pediatric dentistry questions from the Turkish Dentistry Specialization Examination (DUS) and examined differences in their performance on information-based versus case-based questions.

MATERIALS AND METHODS: The seven models were evaluated on 127 multiple-choice questions from the DUS pediatric dentistry question bank (2012-2021), classified by experts as information-based (n = 96) or case-based (n = 31). Questions were entered in Turkish without modification, and responses were scored against the official answer keys.

RESULTS: Overall accuracy differed significantly among the models (p < 0.001). Gemini 2.5 Pro achieved the highest overall accuracy (94.5%; 120/127), and GPT-4o the lowest (63.0%; 80/127). On information-based questions, Gemini answered 92/96 correctly (95.8%) versus 66/96 (68.8%) for GPT-4o; on case-based questions, Gemini answered 28/31 correctly (90.3%) versus 5/31 (16.1%) for Perplexity. Pairwise Wilcoxon comparisons confirmed Gemini's significant superiority over most of the other models and the markedly weak performance of GPT-4o and Perplexity on case-based questions (p < 0.001).

CONCLUSIONS: LLMs can serve as effective "co-pilots" for information retrieval and exam preparation in dental education but are currently unreliable for diagnostic and treatment decision-making. Clinicians and students should use LLM outputs for review and learning while basing final decisions on professional experience, ethical responsibility, and patient-centered judgment. Future research should evaluate and enhance LLMs' multimodal and visual-data processing capabilities to improve clinical applicability.