ChatGPT Yields a Passing Score on a Pediatric Board Preparatory Exam but Raises Red Flags


Abstract

OBJECTIVES: We aimed to evaluate the performance of a publicly available online artificial intelligence program (OpenAI's ChatGPT-3.5 and -4.0, August 3 versions) on a pediatric board preparatory examination, the 2021 and 2022 PREP® Self-Assessments of the American Academy of Pediatrics (AAP).

METHODS: In September 2023, we entered 245 questions and answer choices from the Pediatrics 2021 PREP® Self-Assessment and 247 questions and answer choices from the Pediatrics 2022 PREP® Self-Assessment into OpenAI's ChatGPT-3.5 and ChatGPT-4.0, August 3 versions. The ChatGPT-3.5 and -4.0 scores were compared with the advertised passing score (70% or higher) for the PREP® exams and with the average scores of 74.09% and 75.71% for all 10,715 and 6,825 first-time human test takers, respectively.

RESULTS: For the AAP 2021 and 2022 PREP® Self-Assessments, ChatGPT-3.5 answered 143 of 243 (58.85%) and 137 of 247 (55.46%) questions correctly on a single attempt. ChatGPT-4.0 answered 193 of 243 (79.84%) and 208 of 247 (84.21%) questions correctly.

CONCLUSION: Using a publicly available online chatbot to answer pediatric board preparatory examination questions yielded a passing score, but the chatbot showed significant limitations in assessing some complex medical situations in children, posing a potential risk to this vulnerable population.
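To make the pass/fail comparison concrete, the short Python sketch below simply tabulates the percentage scores reported in the abstract against the advertised 70% passing threshold and the first-time human averages. The figures are taken directly from the abstract above; the script itself (variable names, output format) is an illustrative assumption, not part of the study's methodology.

```python
# Minimal sketch: compare the abstract's reported scores with the advertised
# 70% passing threshold and the first-time human test-taker averages.
# All numeric figures come from the abstract; everything else is illustrative.

PASSING_SCORE = 70.0                         # advertised PREP passing score, in percent
HUMAN_AVERAGE = {2021: 74.09, 2022: 75.71}   # first-time human averages per exam year

reported_scores = {
    ("ChatGPT-3.5", 2021): 58.85,
    ("ChatGPT-3.5", 2022): 55.46,
    ("ChatGPT-4.0", 2021): 79.84,
    ("ChatGPT-4.0", 2022): 84.21,
}

for (model, year), score in reported_scores.items():
    verdict = "pass" if score >= PASSING_SCORE else "fail"
    margin = score - HUMAN_AVERAGE[year]
    print(f"{model}, {year} PREP Self-Assessment: {score:.2f}% ({verdict}; "
          f"{margin:+.2f} points vs. human first-time average)")
```

Running this confirms the abstract's framing: both ChatGPT-3.5 attempts fall below the 70% threshold, while both ChatGPT-4.0 attempts pass and exceed the corresponding human first-time averages.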
