Evaluation of ChatGPT's Accuracy, Repeatability, and Reasoning Ability in Prosthodontics Education: A Cross-Sectional Comparative Study with Prosthodontists


Abstract

BACKGROUND: The integration of artificial intelligence (AI) tools such as ChatGPT into dental education is increasing, yet their accuracy, reasoning quality, and reliability remain underexplored in specialized fields such as prosthodontics. This study aimed to evaluate the performance of ChatGPT in answering prosthodontics-based questions by comparing its accuracy with that of experienced prosthodontists, and by assessing its repeatability and reasoning ability. MATERIAL AND METHODS: A cross-sectional observational study was conducted using 36 validated prosthodontics-based questions, categorized by difficulty (easy, medium, hard) and type (theoretical, clinical). Responses were obtained from a panel of prosthodontists via Google Forms and from the ChatGPT-4o mini model, queried twice daily for 15 days. Each group generated 1,080 responses. The accuracy of ChatGPT's responses was compared with that of the prosthodontists. ChatGPT's reliability was assessed using the intraclass correlation coefficient (ICC), standard error of measurement (SEM), and coefficient of variation (CV). Five subject-matter experts rated ChatGPT's reasoning quality on a 3-point Likert scale, and Pearson correlation was used to analyze the relationship between reasoning quality and accuracy. RESULTS: Prosthodontists outperformed ChatGPT in overall accuracy (p < 0.05), with significant differences observed particularly for medium-difficulty and clinical questions. ChatGPT demonstrated fair reliability (ICC = 0.427), with an SEM of 25.18 and a CV of 61.7%, indicating moderate variability. Reasoning analysis showed that 38.9% of ChatGPT's responses were rated strong, while 36.1% were rated poor. A significant positive correlation was found between reasoning quality and accuracy (r = 0.353, p = 0.035). CONCLUSIONS: ChatGPT demonstrates moderate ability in delivering accurate theoretical information but lacks consistency and clinical judgment. Its role should be limited to that of a supplementary aid in dental education, with expert oversight required to ensure accuracy and contextual relevance.
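The abstract does not give the exact statistical pipeline behind the reported ICC, SEM, and CV. As a minimal sketch only, assuming scores are arranged as a questions × repeated-sessions matrix and a one-way random-effects ICC(1,1) model, the three reliability measures could be computed with NumPy as follows (function names and the input layout are hypothetical, not from the study):

```python
import numpy as np

def icc_1_1(scores):
    """One-way random-effects ICC(1,1) for an (n_subjects, k_sessions) array.

    Computed from the one-way ANOVA mean squares:
    ICC = (MSB - MSW) / (MSB + (k - 1) * MSW).
    """
    n, k = scores.shape
    grand_mean = scores.mean()
    row_means = scores.mean(axis=1)
    ssb = k * ((row_means - grand_mean) ** 2).sum()     # between-subjects SS
    ssw = ((scores - row_means[:, None]) ** 2).sum()    # within-subjects SS
    msb = ssb / (n - 1)
    msw = ssw / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

def sem_from_icc(scores, icc):
    """Standard error of measurement: pooled SD * sqrt(1 - ICC)."""
    return scores.std(ddof=1) * np.sqrt(1 - icc)

def cv_percent(scores):
    """Coefficient of variation as a percentage of the mean."""
    return scores.std(ddof=1) / scores.mean() * 100
```

For example, 30 daily accuracy scores would form a (15 days × 2 sessions) matrix per question set; perfectly repeated scores yield an ICC of 1.0 and an SEM of 0.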
