Can Artificial Intelligence be Successful as an Anaesthesiology and Reanimation Resident?


Abstract

OBJECTIVE: This study aims to compare the performance of the artificial intelligence (AI) chatbot ChatGPT with anaesthesiology and reanimation residents at a major hospital in an exam modelled after the European Diploma in Anaesthesiology and Intensive Care Part I.

METHODS: The annual training exam for residents was administered electronically. One day prior to this, the same questions were posed to an AI language model. During the analysis, the residents were divided into two groups based on their training duration (less than 24 months: Group J; 24 months or more: Group S). Two books and four guides were used as references in the preparation of a 100-question multiple-choice exam, with each correct answer awarded one point.

RESULTS: The median exam score among all participants was 70 [interquartile range (IQR) 67-73] out of 100. ChatGPT correctly answered 71 questions. Group J had a median exam score of 67 (IQR 65.25-69), while Group S scored 73 (IQR 70-75) (P < 0.001). Residents with less than 24 months of training performed significantly worse across all subtopics compared to those with more extensive training (P < 0.05). When ranked within the groups, ChatGPT placed eighth in Group J and 47th in Group S.

CONCLUSION: ChatGPT exhibited a performance comparable to that of a resident in an exam centred on anaesthesiology and critical care. We suggest that by tailoring an AI model like ChatGPT to anaesthesiology and resuscitation, exam performance could be enhanced, paving the way for its development as a valuable tool in medical education.
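The summary statistics reported above (median, IQR, and within-group rank) are straightforward to reproduce. The sketch below shows one way to compute them in Python; the score lists are hypothetical illustrations, since the per-resident data are not given in the abstract.

```python
from statistics import quantiles

def summarize(scores):
    """Return (median, (Q1, Q3)) using the default 'exclusive' method,
    matching the median [IQR] style reported in the abstract."""
    q1, q2, q3 = quantiles(scores, n=4)
    return q2, (q1, q3)

def rank_within(group_scores, candidate_score):
    """1-based rank of a candidate among a group, higher scores first;
    ties count in the group's favour."""
    better = sum(1 for s in group_scores if s > candidate_score)
    return better + 1

# Hypothetical Group J scores -- for illustration only.
group_j = [65, 66, 67, 67, 68, 69, 70, 72, 74]
median_j, iqr_j = summarize(group_j)
rank_chatgpt = rank_within(group_j, 71)  # ChatGPT scored 71/100
```

With real per-resident scores, the same two functions would yield the group medians, IQRs, and ChatGPT's eighth- and 47th-place rankings reported in the results.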
