Large Language Models Show Comparable Response Performance but Vary in Readability Regarding Patient Questions on Hip Arthroscopy


Abstract

PURPOSE: To compare the quality of large language model (LLM) responses to frequently asked questions regarding hip arthroscopy, assess the incorrect response rate of LLMs, and compare readability among the different LLM outputs.

METHODS: Three LLMs, OpenAI Chat Generative Pre-Trained Transformer (ChatGPT) 3.5, Microsoft Copilot, and Google Gemini, were each queried with 10 frequently asked questions regarding hip arthroscopy. Two high-volume hip arthroscopists graded the responses on a 4-point Likert scale (1 = excellent, requiring no clarification; 2 = satisfactory, requiring minimal clarification; 3 = satisfactory, requiring moderate clarification; and 4 = unsatisfactory, requiring substantial clarification). Additionally, the 2 graders ranked the responses from the 3 LLMs for each of the 10 questions on a 3-point Likert scale (1 = best, 2 = intermediate, 3 = worst). Readability was assessed using the Flesch-Kincaid Grade Level and Flesch Reading Ease metrics.

RESULTS: The commonly used LLMs performed at similar levels of response accuracy and adequacy (mean ± SD: ChatGPT: 3.0 ± 1.0 vs Microsoft: 2.9 ± 1.1 vs Gemini: 2.6 ± 1.1, P = .481). Reviewers had no preference for one LLM's responses over another (mean ± SD: ChatGPT: 2.0 ± 0.8 vs Microsoft: 2.1 ± 0.9 vs Gemini: 2.0 ± 0.8, P = .931). The overall incorrect response rate among LLMs was 20%. ChatGPT responses were written at a significantly more difficult reading level than Gemini and Microsoft outputs (Flesch-Kincaid Grade Level mean ± SD: ChatGPT: 11.0 ± 2.2 grade reading level vs Microsoft: 8.6 ± 2.3 vs Gemini: 6.6 ± 2.2, P = .003; Flesch Reading Ease mean ± SD: ChatGPT: 36.6 ± 19.0 vs Microsoft: 57.7 ± 13.3 vs Gemini: 65.0 ± 4.7, P = .001).

CONCLUSIONS: Hip arthroscopists rate LLM responses to patient questions regarding hip arthroscopy as satisfactory but requiring moderate clarification, and show no preference for one LLM's responses over another. LLMs produce a substantial number of incorrect responses. ChatGPT outputs were written at a significantly more difficult reading level than those of Microsoft and Gemini.

CLINICAL RELEVANCE: This study provides insights into the accuracy and readability of LLM-generated responses to commonly asked questions about hip arthroscopy. As patients increasingly turn to artificial intelligence tools for health information, understanding the quality and potential risks of misinformation becomes essential.
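The abstract does not state how the two readability metrics were computed (a standard tool was presumably used). For readers unfamiliar with them, the published Flesch-Kincaid Grade Level and Flesch Reading Ease formulas are straightforward to implement. The sketch below is illustrative only: the syllable counter is a naive vowel-group heuristic of my own, not the dictionary-based method production tools use, so its scores will differ slightly from those reported in the study.

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable count: runs of vowels, with a silent-'e' adjustment.
    A heuristic for illustration; real tools use pronunciation dictionaries."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1  # treat trailing 'e' as silent
    return max(count, 1)

def readability(text: str) -> tuple[float, float]:
    """Return (Flesch-Kincaid Grade Level, Flesch Reading Ease) for text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # average words per sentence
    spw = syllables / len(words)        # average syllables per word
    # Published Flesch-Kincaid formulas:
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    fre = 206.835 - 1.015 * wps - 84.6 * spw
    return fkgl, fre

sample = ("Hip arthroscopy is a minimally invasive procedure. "
          "Most patients return to normal activity within a few months.")
grade, ease = readability(sample)
print(f"Flesch-Kincaid Grade Level: {grade:.1f}")
print(f"Flesch Reading Ease: {ease:.1f}")
```

Lower Grade Level and higher Reading Ease both indicate easier text, which is why Gemini's outputs (grade 6.6, ease 65.0) were the most accessible and ChatGPT's (grade 11.0, ease 36.6) the least.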
