Abstract
BACKGROUND AND PURPOSE: The ability of large language models (LLMs) such as ChatGPT and Gemini to respond accurately to sports surgery-related patient questions remains unknown. This study aimed to compare the responses of ChatGPT and Gemini regarding anterior cruciate ligament (ACL) and meniscal injuries with the recommendations of the American Academy of Orthopaedic Surgeons (AAOS) Evidence-Based Clinical Practice Guidelines (CPGs).

METHODS: We queried ChatGPT and Gemini with questions based on statements from the AAOS CPGs for ACL and meniscus injuries. Two reviewers classified each response as "Agree," "Neutral," or "Disagree" with the AAOS CPGs. Cohen's kappa coefficient was used to assess interrater reliability, and chi-squared analyses were used to compare responses between the LLMs.

RESULTS: Of the 11 CPG recommendations of strong or moderate strength, ChatGPT and Gemini provided responses in agreement with 9 (82%) and 8 (73%) of the recommendations, respectively. Both LLMs showed perfect concordance with the meniscus CPGs, and there were no significant differences between the LLMs' ACL responses or between their responses to strong- and moderate-strength recommendation queries. ChatGPT provided no study references, whereas Gemini provided 25 PubMed references, of which 23 appropriately supported the claims made in its responses.

CONCLUSIONS: While there is still room for growth and improved transparency in these large language models, providers can expect these AI platforms to generally provide patients with information that aligns with lower extremity sports surgery clinical practice guidelines.
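The abstract does not specify how the statistics were implemented. Purely as an illustration, the interrater reliability (Cohen's kappa) and between-model comparison (chi-squared) described in the METHODS could be computed along the following lines in Python; all labels and counts below are hypothetical and are not the study's data.

    # Illustrative sketch only: hypothetical rater labels and response counts.
    from sklearn.metrics import cohen_kappa_score
    from scipy.stats import chi2_contingency

    # Hypothetical "Agree"/"Neutral"/"Disagree" classifications from two reviewers
    reviewer_1 = ["Agree", "Agree", "Neutral", "Agree", "Disagree", "Agree"]
    reviewer_2 = ["Agree", "Agree", "Agree",   "Agree", "Disagree", "Agree"]

    # Interrater reliability between the two reviewers
    kappa = cohen_kappa_score(reviewer_1, reviewer_2)
    print(f"Cohen's kappa: {kappa:.2f}")

    # Hypothetical contingency table of response classifications per LLM
    # (rows: ChatGPT, Gemini; columns: Agree, Neutral, Disagree)
    observed = [[9, 1, 1],
                [8, 2, 1]]
    chi2, p_value, dof, expected = chi2_contingency(observed)
    print(f"chi-squared = {chi2:.2f}, p = {p_value:.3f}")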