Impact of prompting on large language model performance: ChatGPT-4 performance on the 2023 hand surgery self-assessment examination


Abstract

BACKGROUND: Large language models (LLMs) such as ChatGPT are artificial intelligence programs designed to interpret and respond to text-based input. These programs can improve their output in response to prompting and tailored prompt engineering. Multiple studies have assessed how various LLMs perform on medical exams at different levels of training. The newest version of ChatGPT, GPT-4, supports image recognition, which is relevant to many questions on orthopedic surgery exams. The performance of GPT-4, and the potential for LLMs to learn from prior exams, remains unclear. The present study analyzed ChatGPT-4's performance on the 2023 hand surgery Maintenance of Certification (MOC) Self-Assessment Examination (SAE) before and after prompting with 5 previous versions of the test. It was hypothesized that GPT-4 would pass the exam and improve its performance after prompting.

METHODS: GPT-4 was tested on all text- and image-based questions from the 2023 hand surgery SAE; video-based questions were excluded. GPT-4 was then provided with the questions, answers, and explanations from 5 previous SAEs (2014 to 2020) and retested on the 2023 SAE text and imaging questions. GPT-4's responses on the prompted and unprompted tests were recorded and compared.

RESULTS: Both the prompted and unprompted versions of ChatGPT-4 exceeded the SAE passing requirement of a >50% correct response rate. GPT-4 answered 67% of all questions correctly unprompted and 71% correctly after prompting (p = 0.51). Sub-analysis showed that GPT-4 answered 66% of image-based questions correctly after prompting, compared with 56% before prompting (p = 0.25). GPT-4 answered 75% of text-only questions correctly before prompting and 74% correctly after prompting (p = 1.0). Fisher's exact tests on all questions, image-only questions, and text-only questions showed no statistically significant differences between the prompted and unprompted versions of GPT-4.
CONCLUSION: GPT-4 demonstrated the ability to analyze orthopedic information, answer specialty-specific questions, and exceed the 50% passing threshold on the 2023 Hand Surgery Self-Assessment Exam. However, prompting GPT-4 with previous SAEs did not yield a statistically significant improvement in performance. With continued advances in AI and deep learning, large language models may someday serve as resources for test simulation and knowledge checks in hand surgery.
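The comparisons above rest on Fisher's exact test for 2×2 contingency tables (correct/incorrect × prompted/unprompted). As a minimal sketch of how such a comparison is computed, the function below implements the standard two-sided test from the hypergeometric distribution using only the Python standard library. Note that the abstract reports only percentages, not raw counts, so the example counts (71/100 vs. 67/100) and the function name are illustrative assumptions, not the study's actual data.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    margins whose probability is no greater than that of the observed table.
    """
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2

    def p_table(x):
        # Hypergeometric probability of x "successes" in row 1
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo = max(0, col1 - row2)   # smallest feasible value of a
    hi = min(col1, row1)       # largest feasible value of a
    # Small tolerance guards against floating-point ties
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Hypothetical counts for illustration: 71/100 correct prompted
# vs. 67/100 correct unprompted (the abstract gives percentages only).
p = fisher_exact_two_sided(71, 29, 67, 33)
print(f"p = {p:.3f}")
```

With differences of this size at plausible question counts, the resulting p-value is well above 0.05, consistent with the abstract's finding of no statistically significant effect of prompting.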
