Abstract
Background/Objectives: Artificial intelligence-based language models such as ChatGPT are increasingly used in medical communication, yet their performance compared with human clinicians remains insufficiently explored in dentistry. Because communication quality, including accuracy and empathy, is essential for patient understanding, this study aimed to compare ChatGPT's responses with those of dentists at different levels of professional experience. Methods: Ten standardized dental patient questions were generated by the authors and answered by ChatGPT and by three dentist groups (<2 years, 2–5 years, and >5 years of experience; one respondent per group, randomly selected from five eligible dentists). Subsequently, 30 dentists rated the professional quality of the responses and 50 patients evaluated perceived empathy, each on a 4-point scale. Group differences were analyzed using the non-parametric Friedman test with exact post hoc comparisons and Bonferroni correction. Results: ChatGPT received higher ratings than all dentist groups in both domains. Mean empathy scores were 3.23 for ChatGPT versus 1.73–2.14 for the dentist groups, and mean quality scores were 3.50 versus 1.79–2.21 (all p < 0.001). Early-career dentists scored moderately higher than the most experienced group but consistently below ChatGPT. Given the exploratory design and the small number of respondents per experience group, these findings should be interpreted cautiously. Conclusions: ChatGPT generated responses rated as more empathetic and of higher professional quality than those of the participating dentists, suggesting potential value for supporting routine, text-based dental communication. However, limitations such as the lack of genuine empathy, data privacy concerns, and questions of clinical responsibility must be considered. Larger studies are needed to validate these results.
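
As a rough illustration of the analysis described in the Methods, the sketch below applies scipy's Friedman test to hypothetical 4-point ratings for the four responders and follows it with pairwise Wilcoxon signed-rank tests under a Bonferroni correction. The simulated data, rater count, and group labels are illustrative assumptions, and the Wilcoxon signed-rank test stands in for the study's exact post hoc comparisons, whose precise form the abstract does not specify.

```python
# Minimal sketch (hypothetical data): Friedman omnibus test across four responders,
# followed by pairwise post hoc comparisons with Bonferroni correction.
# Ratings, sample size, and labels are invented for illustration, not study data.
from itertools import combinations

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_raters = 50  # e.g., patients rating perceived empathy on a 4-point scale

# Hypothetical 4-point ratings; each rater scores every responder (repeated measures)
ratings = {
    "ChatGPT":      rng.integers(2, 5, n_raters),
    "dentist_<2y":  rng.integers(1, 4, n_raters),
    "dentist_2-5y": rng.integers(1, 4, n_raters),
    "dentist_>5y":  rng.integers(1, 3, n_raters),
}

# Non-parametric Friedman test for overall group differences
stat, p = stats.friedmanchisquare(*ratings.values())
print(f"Friedman chi2 = {stat:.2f}, p = {p:.4f}")

# Pairwise Wilcoxon signed-rank tests as the post hoc step,
# with a Bonferroni adjustment (multiply p by the number of comparisons, cap at 1)
pairs = list(combinations(ratings, 2))
for a, b in pairs:
    _, p_pair = stats.wilcoxon(ratings[a], ratings[b])
    p_adj = min(p_pair * len(pairs), 1.0)
    print(f"{a} vs {b}: adjusted p = {p_adj:.4f}")
```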