Abstract
BACKGROUND: Artificial intelligence (AI) has the potential to enhance objectivity and scalability in educational assessment, yet its role in evaluating technical dental skills remains unclear. This study aimed to compare ChatGPT-4o-based assessments with expert evaluations in undergraduate endodontic training and to explore student perceptions of AI-assisted feedback.
METHODS: This cross-sectional pilot study was conducted during the 2024-2025 academic year with 32 dental students at a faculty of dentistry who had completed root canal treatments. Postoperative radiographs were evaluated independently by an endodontist with 10 years of experience and by ChatGPT-4o against five standardized criteria: canal centering, homogeneity, procedural errors, apical shaping, and overall taper, each rated on a 5-point Likert scale. Inter-rater reliability was assessed via intraclass correlation coefficients (ICC), and Pearson correlation was used to test linear alignment. Students rated both feedback sources via Likert-scale questionnaires and open-ended comments; paired-sample t-tests compared the mean scores.
RESULTS: Agreement between AI and expert evaluations was limited, with ICCs ranging from 0.36 to 0.45, indicating poor to moderate reliability. Pearson r values were < 0.3 and not statistically significant, indicating weak linear correlation. Although students rated AI-generated feedback as moderately useful, expert feedback scored higher for educational value (mean 4.29 vs. 3.90), clinical reasoning support (4.19 vs. 4.06), and reliability (4.00 vs. 3.91); these differences were not statistically significant. Notably, 53.1% of students preferred a combination of AI and expert feedback for optimal learning.
CONCLUSIONS: AI-generated feedback was moderately useful to students, but expert feedback consistently scored higher. For complex psychomotor skills and radiographic interpretation, AI should serve as an auxiliary tool rather than an independent assessor.
Further validation of advanced multimodal AI systems and development of hybrid frameworks combining algorithmic objectivity with expert judgment are recommended.
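The agreement analysis described in METHODS (ICC for inter-rater reliability, Pearson r for linear alignment) can be sketched as follows. This is a minimal illustration, not the study's analysis: the ratings below are hypothetical placeholders, and ICC(2,1) (two-way random effects, absolute agreement, single rater) is assumed as the ICC form, since the abstract does not specify which was used.

```python
import numpy as np
from scipy.stats import pearsonr

def icc2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: (n_subjects, n_raters) matrix of scores.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-case means
    col_means = ratings.mean(axis=0)   # per-rater means

    # Mean squares from the two-way ANOVA decomposition
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_total = np.sum((ratings - grand) ** 2)
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Hypothetical 5-point Likert ratings for 8 cases:
# column 0 = expert, column 1 = AI (illustrative only)
scores = np.array([
    [4, 3], [5, 4], [3, 3], [2, 4],
    [4, 5], [5, 3], [3, 2], [4, 4],
], dtype=float)

icc = icc2_1(scores)
r, p = pearsonr(scores[:, 0], scores[:, 1])
print(f"ICC(2,1) = {icc:.2f}, Pearson r = {r:.2f} (p = {p:.2f})")
```

With two raters per case, the ICC penalizes systematic score offsets between the expert and the AI (absolute agreement), whereas Pearson r only measures linear association, which is why the two statistics can diverge.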