Abstract
BACKGROUND: Large Language Models (LLMs) have shown potential in supporting patient education and self-management, but their performance in responding to orthodontic questions has yet to be explored. OBJECTIVES: This study aims to compare the quality, empathy, readability, and satisfaction of responses from LLMs and search engines to common orthodontic questions. METHODS: Forty-five common orthodontic questions (six categories) and a prompt were developed, and a self-designed multidimensional evaluation questionnaire was constructed. The questions were presented to five LLMs and three search engines on December 22, 2024. The primary outcomes were the median expert-rated scores of LLM versus search engine responses on quality, empathy, readability, and satisfaction, using 5- or 10-point Likert scales. RESULTS: LLMs scored significantly higher than search engines in quality (4.00 vs. 3.50, p < 0.001), empathy (3.75 vs. 3.50, p < 0.001), readability (4.00 vs. 3.75, p < 0.001), and satisfaction (8.00 vs. 7.25, p < 0.001). LLM-generated responses were rated significantly higher than those from search engines in the therapeutic outcomes, appliance selection, and cost categories. CONCLUSIONS: In this cross-sectional study, the LLMs, particularly GPT-4o, outperformed search engines. These results indicate the potential of LLMs as supplementary tools for orthodontic patient education and self-management.