Comparative Evaluation of Accuracy, Completeness and Readability of Common Patient Queries Related to Prosthodontic Treatment by Two Artificial Intelligence Models (ChatGPT-4o and Gemini)


Abstract

INTRODUCTION: Artificial intelligence (AI) is fundamentally characterized by the capacity of computer systems to execute tasks that traditionally require human intelligence through the application of sophisticated algorithms. The coming years are bound to witness an increase in the number of people relying on AI language models for initial health-related queries. Reasons include the growing credibility of AI models, convenience and privacy, initial knowledge acquisition before visiting a doctor, and guidance regarding treatment options.

AIM: The aim of the study was to compare the performance of two commonly used AI models, Chat Generative Pretrained Transformer (ChatGPT-4o, OpenAI, San Francisco, United States) and Google Gemini Flash 2.5 (formerly Google Bard; Google DeepMind, Mountain View, California, United States), in responding to common patient queries related to prosthodontic treatments, and to evaluate their responses in terms of accuracy, completeness, generation time, length and readability.

MATERIALS AND METHODS: Thirty open-ended questions frequently asked by patients visiting prosthodontists were collected, ten from each of three domains: removable dentures (partial and complete), fixed partial dentures, and dental implantology. All questions were submitted to both AI models, ChatGPT-4o and Google Gemini Flash 2.5. Four experts in the field of prosthodontics, each with a minimum of 20 years' experience and blinded to the source of the responses, evaluated the accuracy and completeness of the outputs. Accuracy and completeness were rated on a Likert scale, readability was assessed with the Simple Measure of Gobbledygook (SMOG) Index, and generation time and response length were also compared.

RESULTS: Model B (Gemini) achieved completeness and accuracy scores 0.25 points higher than Model A (ChatGPT).
In terms of readability, Gemini generated outputs with a median SMOG index 1.46 points higher than ChatGPT (p = 0.004), indicating a more advanced reading level.

CONCLUSION: Gemini demonstrated superior performance compared to ChatGPT, delivering more professional and technical content, while ChatGPT's responses were more accessible and comprehensible for non-professionals and the general public.
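The SMOG Index used in the methods estimates the school grade level required to understand a text from its density of polysyllabic words (words of three or more syllables), via the published formula grade = 1.0430 × sqrt(polysyllables × 30 / sentences) + 3.1291. A minimal sketch of the computation follows; the vowel-group syllable counter is a crude heuristic chosen for illustration, not the counting method used in the study:

```python
import math
import re

def smog_index(text: str) -> float:
    """Estimate the SMOG readability grade of a text.

    Formula: 1.0430 * sqrt(polysyllables * 30 / sentences) + 3.1291.
    Syllables are approximated by counting vowel groups per word,
    a rough heuristic; dictionary-based counters are more accurate.
    """
    # Split into sentences on terminal punctuation, dropping empties.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    # A word is "polysyllabic" if it has >= 3 vowel groups.
    polysyllables = sum(
        1 for w in words
        if len(re.findall(r"[aeiouy]+", w.lower())) >= 3
    )
    return 1.0430 * math.sqrt(polysyllables * 30 / len(sentences)) + 3.1291
```

With no polysyllabic words the index bottoms out at the constant 3.1291, and texts dense in long technical terms (such as Gemini's more clinical phrasing reported above) score correspondingly higher grades.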
