A Comparative Analysis of GPT-3.5, GPT-4, GPT-4 Omni, Gemini Advanced, and Gemini 1.5 in Answering Frequently Asked Questions Regarding High Tibial Osteotomy

Abstract

BACKGROUND: Large language model (LLM)-based chatbots, such as ChatGPT and Gemini, have become widely used sources of medical information. No study has assessed the performance of LLM chatbots in providing clinically reliable information on high tibial osteotomy (HTO).

PURPOSE: To evaluate the accuracy and relevance of different LLM chatbots in responding to frequently asked questions (FAQs) about HTO.

STUDY DESIGN: Cross-sectional study.

METHODS: A total of 35 FAQs about HTO were curated from online sources and grouped into 6 categories: general/procedure related; indications for surgery and outcomes; risks and complications of surgery; pain and postoperative recovery; specific activities after surgery; and alternatives to and variations of HTO. These questions were used as input to 5 LLM chatbots: ChatGPT-3.5, ChatGPT-4, ChatGPT-4 Omni, Gemini Advanced, and Gemini 1.5. Responses were collected from July 12 to 14, 2024 (ChatGPT-3.5, ChatGPT-4, ChatGPT-4 Omni, and Gemini Advanced) and on September 26, 2024 (Gemini 1.5). Two independent orthopaedic surgeons rated each response on a 5-point Likert scale (1 = very incorrect/very irrelevant, 5 = very accurate/very relevant). Responses were anonymized to blind the evaluators to chatbot identity. Differences in accuracy among chatbots were assessed with analysis of variance, and differences in relevance with the Kruskal-Wallis test.

RESULTS: Mean accuracy scores were as follows: GPT-3.5, 4.66 ± 0.64; GPT-4, 4.66 ± 0.54; GPT-4 Omni, 4.94 ± 0.24; and Gemini 1.5, 4.86 ± 0.36. Gemini Advanced scored significantly lower (3.83 ± 1.40; P < .001) in answering HTO-related FAQs. In particular, Gemini Advanced showed lower accuracy in the categories of indications and outcomes (P = .002) and alternatives and variations (P = .015). There were no significant differences among the models for general/procedure-related questions (P = .12), risks and complications (P = .50), pain and postoperative recovery (P = .53), or specific activities after surgery (P = .09). All models provided relevant answers to all questions (35/35; 100%), except Gemini Advanced (30/35; 85.7%).

CONCLUSION: ChatGPT-3.5, ChatGPT-4, ChatGPT-4 Omni, and Gemini 1.5 provided accurate and relevant responses on HTO, whereas Gemini Advanced exhibited limitations and underperformed relative to the other models.
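The statistical comparison described in the Methods (one-way analysis of variance for accuracy scores, Kruskal-Wallis test for relevance scores) can be sketched in Python with SciPy. The ratings below are randomly generated placeholders standing in for the study's 5-point Likert scores, which are not published in the abstract; the group names match the five chatbots evaluated.

```python
# Minimal sketch of the analysis pipeline, assuming per-question Likert
# ratings for each chatbot. The data here are hypothetical, not the study's.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 5-point accuracy ratings for 35 FAQs per chatbot.
scores = {
    "GPT-3.5": rng.integers(4, 6, 35),
    "GPT-4": rng.integers(4, 6, 35),
    "GPT-4 Omni": rng.integers(4, 6, 35),
    "Gemini Advanced": rng.integers(2, 6, 35),  # wider spread, lower mean
    "Gemini 1.5": rng.integers(4, 6, 35),
}

# One-way ANOVA across the five chatbots (accuracy comparison).
f_stat, p_anova = stats.f_oneway(*scores.values())

# Kruskal-Wallis test, appropriate for ordinal Likert data (relevance
# comparison in the study; the same placeholder arrays are reused here).
h_stat, p_kw = stats.kruskal(*scores.values())

print(f"ANOVA: F = {f_stat:.2f}, P = {p_anova:.4f}")
print(f"Kruskal-Wallis: H = {h_stat:.2f}, P = {p_kw:.4f}")
```

A P value below .05 from either test would indicate that at least one chatbot's score distribution differs from the others, which is what motivated the study's follow-up per-category comparisons.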
