Evaluation of the performance of large language models in endoscopic lumbar surgery: a comparative analysis


Abstract

OBJECTIVE: This study aimed to evaluate and compare the performance of three large language models (LLMs), ChatGPT o1-preview, Claude 3.5 Sonnet, and Gemini 1.5 Pro, in providing information on endoscopic lumbar surgery based on 10 frequently asked patient questions.

METHODS: Ten frequently asked patient questions about endoscopic lumbar surgery were selected through discussion among the authors and submitted to the three LLMs. Responses were evaluated by five spine surgeons using a 5-point Likert scale for overall quality, text readability, content relevance, and humanistic care. Additionally, five volunteers without a medical background assessed the understandability of the responses and their satisfaction with them.

RESULTS: The intraclass correlation coefficients among the five evaluators were 0.522 for ChatGPT o1-preview, 0.686 for Claude 3.5 Sonnet, and 0.512 for Gemini 1.5 Pro. Claude 3.5 Sonnet received the highest scores for overall quality (4.86 ± 0.35, P < 0.001), text readability (4.91 ± 0.32, P < 0.001), and content relevance (4.78 ± 0.42, P < 0.001). ChatGPT o1-preview was preferred most often by the volunteers without a medical background (49%), followed by Gemini 1.5 Pro (29%) and Claude 3.5 Sonnet (22%).

CONCLUSION: From the perspective of professional spine surgeons, Claude 3.5 Sonnet provided the highest-quality and most relevant information, whereas ChatGPT o1-preview was more understandable and satisfactory for non-professional users. This study highlights the potential of LLMs in patient education while emphasizing the need for careful consideration of their role in medical practice, including technical limitations and ethical issues.
