OP1.5 Evaluating ChatGPT’s Performance in Answering Patients’ Questions Relating to Femoroacetabular Impingement Syndrome and Arthroscopic Hip Surgery

Abstract

Background: This study evaluates the efficacy of large language models (LLMs) such as ChatGPT in providing accurate and reliable patient information on Femoroacetabular Impingement (FAI) syndrome and its arthroscopic management. The advent of AI and LLMs has transformed the accessibility of medical information, necessitating an examination of their reliability and accuracy. Given patients' well-documented reliance on online resources for medical information, this research aims to assess the accuracy of ChatGPT responses to common patient inquiries about FAI and its surgical treatment. The primary goal was to ascertain the overall accuracy and reliability of ChatGPT-generated information, with a secondary aim of comparing performance between ChatGPT versions 3.5 and 4.0.

Methods: Using a set of twelve frequently asked questions about FAI, collected from the scientific literature and reputable healthcare websites, the study evaluated and compared responses from ChatGPT versions 3.5 and 4.0. The responses were rated in a blinded fashion by three experienced hip arthroscopy surgeons using a previously published ChatGPT Response Rating System, ranging from "excellent response not requiring clarification" to "unsatisfactory requiring substantial clarification." A descriptive quantitative and qualitative analysis was conducted. A Wilcoxon signed-rank test was used to compare the paired groups (GPT 3.5 versus GPT 4.0), and Gwet's AC2 coefficient, with quadratic weights, was used to assess the chance-corrected level of agreement among raters.

Results: Both ChatGPT versions predominantly produced responses rated either "excellent" or "satisfactory requiring minimal clarification," accounting for 75% and 92% of the responses for ChatGPT 3.5 and 4.0, respectively. The median accuracy scores were 2 (range 1-3) for ChatGPT 3.5 and 1.5 (range 1-3) for ChatGPT 4.0. No response was judged "unsafe or requiring substantial clarification" by the experts. The difference between the two versions was not statistically significant (p=0.279), although ChatGPT-4.0 showed a tendency toward higher accuracy in some areas.

Conclusion: ChatGPT demonstrates a promising capacity to provide accurate and helpful information on FAI syndrome and its treatment, with both versions performing satisfactorily. This research underscores the importance of ongoing evaluation and refinement of AI tools in healthcare to ensure their reliability and effectiveness in patient education and support.
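The paired comparison described in the Methods can be sketched in Python. The ratings below are hypothetical placeholders, not the study's data, and the rating-scale encoding (1 = excellent through 4 = unsatisfactory) is an assumption; in practice `scipy.stats.wilcoxon` would be the standard tool, but a minimal pure-Python normal-approximation version illustrates the mechanics of the test:

```python
import math

def wilcoxon_signed_rank(x, y):
    """Two-sided Wilcoxon signed-rank test using the normal approximation.

    Zero differences are discarded and tied absolute differences receive
    average ranks, mirroring the common "wilcox" zero-handling convention.
    """
    diffs = [a - b for a, b in zip(x, y) if a != b]
    n = len(diffs)
    # Assign average ranks to the sorted absolute differences.
    abs_sorted = sorted(abs(d) for d in diffs)
    rank_of = {}
    i = 0
    while i < len(abs_sorted):
        j = i
        while j < len(abs_sorted) and abs_sorted[j] == abs_sorted[i]:
            j += 1
        rank_of[abs_sorted[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    # W+ is the sum of ranks for positive differences.
    w_plus = sum(rank_of[abs(d)] for d in diffs if d > 0)
    mean = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mean) / sd
    # Two-sided p-value from the standard normal distribution.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return w_plus, p

# Hypothetical accuracy ratings for the twelve questions
# (1 = excellent ... 4 = unsatisfactory); NOT the study's data.
gpt35 = [2, 1, 2, 3, 2, 1, 2, 2, 1, 3, 2, 2]
gpt40 = [1, 1, 2, 2, 2, 1, 1, 2, 1, 2, 2, 1]

w, p = wilcoxon_signed_rank(gpt35, gpt40)
print(f"W+ = {w}, p = {p:.3f}")
```

With small samples such as twelve paired ratings and many ties, an exact permutation version (or the exact mode of a library implementation) would be preferable to the normal approximation; the sketch above is only meant to show the rank-and-sum logic underlying the test reported in the abstract.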
