Abstract
BACKGROUND: The integration of artificial intelligence (AI) tools such as ChatGPT into dental education is increasing, yet their accuracy, reasoning quality, and reliability remain underexplored in specialized fields such as prosthodontics. This study evaluated the performance of ChatGPT on prosthodontics-based questions by comparing its accuracy with that of experienced prosthodontists, and by assessing its repeatability and reasoning ability.

MATERIALS AND METHODS: A cross-sectional observational study was conducted using 36 validated prosthodontics-based questions, categorized by difficulty (easy, medium, hard) and type (theoretical, clinical). Responses were collected from a panel of prosthodontists via a Google Form and from the ChatGPT-4o mini model, queried twice daily for 15 days. Each group generated 1080 responses. The accuracy of ChatGPT's responses was compared with that of the prosthodontists. ChatGPT's reliability was assessed using the intraclass correlation coefficient (ICC), standard error of measurement (SEM), and coefficient of variation (CV). Five subject-matter experts rated ChatGPT's reasoning quality on a 3-point Likert scale, and Pearson correlation was used to analyze the relationship between reasoning and accuracy.

RESULTS: Prosthodontists outperformed ChatGPT in overall accuracy (p < 0.05), with significant differences observed particularly for medium-difficulty and clinical questions. ChatGPT demonstrated fair reliability (ICC = 0.427), with an SEM of 25.18 and a CV of 61.7%, indicating moderate variability. Reasoning analysis showed that 38.9% of ChatGPT's responses were rated strong, while 36.1% were rated poor. A significant positive correlation was found between reasoning quality and accuracy (r = 0.353, p = 0.035).

CONCLUSIONS: ChatGPT demonstrates moderate ability in delivering accurate theoretical information but lacks consistency and clinical judgment. Its role should be limited to that of a supplementary aid in dental education, with expert oversight required to ensure accuracy and contextual relevance.