Abstract
OBJECTIVE: To compare the accuracy and consistency of five large language models (LLMs) in generating responses about dental trauma. MATERIALS AND METHODS: Sixty dichotomous (true/false) questions were submitted daily to each of five LLMs (ChatGPT, Google Gemini, Microsoft Copilot, DeepSeek, and Meta AI) for 30 days under two prompting conditions (zero-shot and zero-shot with context), totaling 18,000 responses (60 questions × 5 models × 30 days × 2 conditions). LLM responses were scored against the International Association of Dental Traumatology (IADT) guidelines. Statistical analysis was conducted using a generalized linear mixed model (GLMM) with a binomial distribution (α = 0.05), alongside calculation of sensitivity, specificity, accuracy, and area under the ROC curve (AUC) based on the 60-item set. Temporal stability was assessed using the intraclass correlation coefficient (ICC). RESULTS: All LLMs achieved accuracy above 85%, with Microsoft Copilot (91.1%) and DeepSeek (90.0%) performing best; no significant difference was observed between these two (p > 0.05), but both outperformed the other models (p < 0.05). DeepSeek and Microsoft Copilot also showed the highest consistency over the 30 days (ICC > 0.90). CONCLUSION: All evaluated LLMs, particularly Microsoft Copilot and DeepSeek, demonstrated high accuracy in providing information on dental trauma, with stable performance over time. The use of a context prompt did not significantly affect accuracy or stability.
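To illustrate how the diagnostic metrics reported above can be computed for one model's single-day run, the minimal Python sketch below scores hypothetical dichotomous responses against guideline answers. The abstract does not specify the software used; the data layout, the variable names (`iadt_truth`, `llm_answers`), and the use of scikit-learn are all assumptions for illustration only.

```python
# Minimal sketch (hypothetical data; the paper does not disclose its analysis code).
# Scores one model's answers to the 60-item set against the IADT guideline key,
# then derives sensitivity, specificity, accuracy, and AUC.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(0)
iadt_truth = rng.integers(0, 2, size=60)                  # guideline answers: 1 = true, 0 = false
llm_answers = iadt_truth.copy()
llm_answers[rng.choice(60, size=6, replace=False)] ^= 1   # flip 6 items to mimic ~90% accuracy

tn, fp, fn, tp = confusion_matrix(iadt_truth, llm_answers).ravel()
sensitivity = tp / (tp + fn)                              # true-positive rate
specificity = tn / (tn + fp)                              # true-negative rate
accuracy = (tp + tn) / (tp + tn + fp + fn)
auc = roc_auc_score(iadt_truth, llm_answers)              # with hard 0/1 predictions, AUC equals balanced accuracy
print(f"sens={sensitivity:.2f} spec={specificity:.2f} acc={accuracy:.2f} auc={auc:.2f}")
```

Because the responses are hard true/false calls rather than probabilities, the AUC here reduces to the mean of sensitivity and specificity.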
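The temporal-stability analysis can be sketched the same way. The abstract names the ICC but not the specific form; the sketch below assumes a two-way random-effects, absolute-agreement, single-rater model (ICC(2,1)), with the 60 items as targets and the 30 days as "raters", and the item-by-day score matrix is fabricated for illustration.

```python
# Hypothetical ICC(2,1) computation for one model's day-to-day consistency.
# Rows = 60 questions, columns = 30 daily repetitions (1 = matched IADT, 0 = did not).
import numpy as np

def icc2_1(x: np.ndarray) -> float:
    """Two-way random-effects, absolute-agreement, single-rater ICC(2,1)."""
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)                     # per-item means across days
    col_means = x.mean(axis=0)                     # per-day means across items
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                        # between-items mean square
    msc = ss_cols / (k - 1)                        # between-days mean square
    mse = ss_err / ((n - 1) * (k - 1))             # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(1)
scores = rng.integers(0, 2, size=(60, 30)).astype(float)  # placeholder score matrix
print(f"ICC(2,1) = {icc2_1(scores):.3f}")
```

Under this reading, an ICC above 0.90, as reported for DeepSeek and Microsoft Copilot, indicates that nearly all score variance lies between items rather than between repetition days.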