Abstract
PURPOSE: To compare the quality of large language model (LLM) responses to frequently asked questions regarding hip arthroscopy, to assess the rate of incorrect LLM responses, and to compare the readability of the different LLM outputs.

METHODS: Three LLMs (OpenAI Chat Generative Pre-Trained Transformer [ChatGPT] 3.5, Microsoft Copilot, and Google Gemini) were each queried with 10 frequently asked questions regarding hip arthroscopy. Two high-volume hip arthroscopists graded the responses on a 4-point Likert scale (1 = excellent, requiring no clarification; 2 = satisfactory, requiring minimal clarification; 3 = satisfactory, requiring moderate clarification; and 4 = unsatisfactory, requiring substantial clarification). The 2 graders also ranked the 3 LLMs' responses to each of the 10 questions (1 = best, 2 = intermediate, 3 = worst). Readability was assessed using the Flesch-Kincaid Grade Level and Flesch Reading Ease metrics.

RESULTS: The 3 LLMs performed similarly in response accuracy and adequacy (mean ± SD: ChatGPT 3.0 ± 1.0 vs Microsoft Copilot 2.9 ± 1.1 vs Gemini 2.6 ± 1.1, P = .481). Reviewers had no preference for one LLM's responses over another (mean rank ± SD: ChatGPT 2.0 ± 0.8 vs Microsoft Copilot 2.1 ± 0.9 vs Gemini 2.0 ± 0.8, P = .931). The overall incorrect response rate across the LLMs was 20%. ChatGPT responses were written at a significantly more difficult reading level than Microsoft Copilot and Gemini outputs (Flesch-Kincaid Grade Level, mean ± SD: ChatGPT 11.0 ± 2.2 vs Microsoft Copilot 8.6 ± 2.3 vs Gemini 6.6 ± 2.2, P = .003; Flesch Reading Ease, mean ± SD: ChatGPT 36.6 ± 19.0 vs Microsoft Copilot 57.7 ± 13.3 vs Gemini 65.0 ± 4.7, P = .001).

CONCLUSIONS: Hip arthroscopists rate LLM responses to patient questions regarding hip arthroscopy as satisfactory but requiring moderate clarification and show no preference for one LLM's responses over another. LLMs produce a substantial proportion of incorrect responses. ChatGPT outputs were written at a significantly more difficult reading level than those of Microsoft Copilot and Gemini.

CLINICAL RELEVANCE: This study provides insight into the accuracy and readability of LLM-generated responses to commonly asked questions about hip arthroscopy. As patients increasingly turn to artificial intelligence tools for health information, understanding the quality of those responses and the risk of misinformation becomes essential.
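For context, both readability metrics are computed from the average sentence length and the average number of syllables per word using the standard Flesch formulas (these definitions are general, not specific to this study):

\[ \text{Flesch Reading Ease} = 206.835 - 1.015\left(\frac{\text{total words}}{\text{total sentences}}\right) - 84.6\left(\frac{\text{total syllables}}{\text{total words}}\right) \]

\[ \text{Flesch-Kincaid Grade Level} = 0.39\left(\frac{\text{total words}}{\text{total sentences}}\right) + 11.8\left(\frac{\text{total syllables}}{\text{total words}}\right) - 15.59 \]

Higher Reading Ease scores and lower Grade Levels indicate more accessible text, so ChatGPT's scores (grade level 11.0, reading ease 36.6) correspond to the least readable output of the 3 models.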
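The abstract does not specify what software was used to compute these scores. As a minimal illustrative sketch, both metrics can be reproduced with the open-source textstat Python package; the sample response text below is hypothetical, not taken from the study:

```python
# Minimal sketch of the readability assessment step; assumes the
# open-source `textstat` package (pip install textstat). The study's
# actual tooling is not stated in the abstract.
import textstat

# Hypothetical LLM response to a hip arthroscopy question.
response = (
    "Hip arthroscopy is a minimally invasive surgery. The surgeon inserts "
    "a small camera into the hip joint to diagnose and treat problems such "
    "as labral tears or femoroacetabular impingement."
)

fkgl = textstat.flesch_kincaid_grade(response)  # U.S. school grade level; lower = easier
fre = textstat.flesch_reading_ease(response)    # 0-100 scale; higher = easier

print(f"Flesch-Kincaid Grade Level: {fkgl:.1f}")
print(f"Flesch Reading Ease: {fre:.1f}")
```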