Performance Evaluation of GPT-4o and o1-Preview Using the Certification Examination for the Japanese 'Operations Chief of Radiography With X-rays'


Abstract

Purpose: The purpose of this study was to assess the ability of large language models (LLMs) to comprehend the safety management, protection methods, and proper handling of X-rays in accordance with laws and regulations. We evaluated the performance of GPT-4o (OpenAI, San Francisco, CA, USA) and o1-preview (OpenAI) using questions from the 'Operations Chief of Radiography With X-rays' certification examination in Japan.

Methods: GPT-4o and o1-preview were asked to answer questions from this Japanese certification examination. A total of four sets of exams published from April 2023 to October 2024 were used. The accuracy of each model was evaluated across four subjects: knowledge about the control of X-rays, relevant laws and regulations, knowledge about the measurement of X-rays, and knowledge about the effects of X-rays on organisms. The results were compared between the two models, excluding graphical questions because o1-preview cannot interpret images.

Results: The overall accuracy rates of GPT-4o and o1-preview ranged from 57.5% to 70.0% and from 71.1% to 86.5%, respectively. GPT-4o achieved passing accuracy rates in all subjects except relevant laws and regulations. In contrast, o1-preview met the passing criteria across all four exam sets, even with graphical questions excluded from scoring. The accuracy rates of o1-preview were significantly higher than those of GPT-4o for all questions (p = 0.03) and for relevant laws and regulations (p = 0.03). No significant differences in accuracy were found for the other subjects.

Conclusions: In the Japanese 'Operations Chief of Radiography With X-rays' certification examination, GPT-4o demonstrated competent performance in all subjects except relevant laws and regulations, while o1-preview performed commendably across all subjects. When graphical questions were excluded from scoring, o1-preview surpassed GPT-4o on all questions and on relevant laws and regulations.
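The abstract reports significance results (p = 0.03) but does not name the statistical test used. As an illustration only, the sketch below compares two accuracy rates with a standard two-proportion z-test; the function name and the correct/total counts in the usage example are hypothetical and not taken from the study.

```python
import math

def two_proportion_ztest(correct_a, total_a, correct_b, total_b):
    """Two-sided two-proportion z-test on correct-answer counts.

    Returns (accuracy_a, accuracy_b, p_value). This is one plausible
    way to compare model accuracies; the study's actual test is unspecified.
    """
    p1 = correct_a / total_a
    p2 = correct_b / total_b
    # Pooled proportion under the null hypothesis of equal accuracy
    pooled = (correct_a + correct_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p1, p2, p_value

# Hypothetical counts, e.g. 70/100 correct for one model vs 90/100 for the other
acc_a, acc_b, p = two_proportion_ztest(70, 100, 90, 100)
```

With clearly separated proportions such as these, the test rejects equality of accuracies at the usual 0.05 level.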
