Abstract
BACKGROUND: Large language model (LLM)-based chatbots, such as ChatGPT and Gemini, have become widely used sources of medical information. No study has assessed the performance of LLM chatbots in providing clinically reliable information on high tibial osteotomy (HTO). PURPOSE: To evaluate the accuracy and relevance of different LLM chatbots in responding to frequently asked questions (FAQs) about HTO. STUDY DESIGN: Cross-sectional study. METHODS: A total of 35 FAQs about HTO were curated from online sources and grouped into 6 categories: general/procedure-related; indications for surgery and outcomes; risks and complications of surgery; pain and postoperative recovery; specific activities after surgery; and alternatives to and variations of HTO. These questions were posed to 5 LLM chatbots: ChatGPT-3.5, ChatGPT-4, ChatGPT-4 Omni, Gemini Advanced, and Gemini 1.5. Responses were collected from July 12 to 14, 2024 (ChatGPT-3.5, ChatGPT-4, ChatGPT-4 Omni, and Gemini Advanced) and on September 26, 2024 (Gemini 1.5). Two independent orthopaedic surgeons rated the responses on a 5-point Likert scale (1 = very inaccurate/very irrelevant, 5 = very accurate/very relevant). Responses were anonymized to blind the evaluators to chatbot identity. Differences in accuracy among chatbots were assessed using analysis of variance, and differences in relevance using the Kruskal-Wallis test. RESULTS: Mean accuracy scores in answering HTO-related FAQs were 4.66 ± 0.64 for GPT-3.5, 4.66 ± 0.54 for GPT-4, 4.94 ± 0.24 for GPT-4 Omni, and 4.86 ± 0.36 for Gemini 1.5, whereas Gemini Advanced scored significantly lower (3.83 ± 1.40; P < .001). In particular, Gemini Advanced showed lower accuracy scores in the categories of indications and outcomes (P = .002) and alternatives and variations (P = .015).
There were no significant differences among the models in the categories of general/procedure-related questions (P = .12), risks and complications (P = .50), pain and postoperative recovery (P = .53), and specific activities after surgery (P = .09). All models provided relevant answers to all 35 questions (35/35; 100%), except for Gemini Advanced (30/35; 85.7%). CONCLUSION: This study showed that ChatGPT-3.5, ChatGPT-4, ChatGPT-4 Omni, and Gemini 1.5 provided accurate and relevant responses about HTO, whereas Gemini Advanced exhibited limitations and underperformed relative to the other models.
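The statistical comparison described in the METHODS (one-way analysis of variance for accuracy scores, Kruskal-Wallis test for relevance) can be sketched as follows. This is a minimal illustration using SciPy; the per-response Likert scores below are invented placeholders, not the study's data, and the number of responses per chatbot is arbitrary.

```python
# Illustrative sketch of the abstract's statistical analysis:
# one-way ANOVA across chatbots (accuracy) and the Kruskal-Wallis
# H test (relevance). Scores here are hypothetical, not study data.
from scipy.stats import f_oneway, kruskal

# Hypothetical 5-point Likert scores, one value per rated response
scores = {
    "GPT-3.5":         [5, 4, 5, 5, 4, 5, 5, 4],
    "GPT-4":           [5, 5, 4, 5, 5, 4, 5, 5],
    "GPT-4 Omni":      [5, 5, 5, 5, 5, 4, 5, 5],
    "Gemini 1.5":      [5, 5, 4, 5, 5, 5, 5, 4],
    "Gemini Advanced": [3, 2, 5, 4, 3, 5, 2, 4],
}

# Parametric test: does mean accuracy differ among the 5 chatbots?
f_stat, p_anova = f_oneway(*scores.values())

# Non-parametric alternative on the same grouped ratings
h_stat, p_kw = kruskal(*scores.values())

print(f"ANOVA:          F = {f_stat:.2f}, P = {p_anova:.4f}")
print(f"Kruskal-Wallis: H = {h_stat:.2f}, P = {p_kw:.4f}")
```

A P value below the significance threshold (.05 in the study) would indicate that at least one chatbot's scores differ from the others; the study's category-level P values reported above would come from running the same tests within each question category.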