Assessment of the Efficacy of the Google Gemini 2.5 Pro Model in Solving the Polish State Specialization Exam in Pediatric Surgery

Abstract

Background: AI language models such as Google Gemini, OpenAI ChatGPT, and Anthropic's Claude are developing rapidly in response to growing demand across daily life, science, and industry. By collecting and processing extensive datasets, including medical data, they have become increasingly popular tools that support not only IT specialists and programmers but also students and resident physicians studying for examinations, including specialization exams. At the same time, the reliability and accuracy of the information these models provide are often questioned. This concern motivated the present study, which assessed the utility of the Google Gemini 2.5 Pro model on the Polish State Specialization Examination (PES) in Pediatric Surgery.

Objective: The objective of this study was to assess the effectiveness and confidence levels of the Gemini 2.5 Pro model in answering PES questions, and thereby to evaluate its potential educational utility in the specialized surgical field of pediatric surgery.

Methods: The study used the most recent official PES in pediatric surgery, from the spring 2025 session. The exam consisted of 120 multiple-choice questions (five options each, one correct answer). Following previously published studies and the nature of PES questions across medical disciplines in Poland, the questions were divided into two categories: clinical and general (theoretical). Before the test, the Gemini 2.5 Pro model was presented with the PES regulations and then given the examination paper containing the questions in Polish. The solved test was checked against the official answer key from the Center for Medical Examinations (CEM) in Łódź. Additionally, the model was instructed to rate its confidence in each answer on a five-point scale (from 1 = no confidence to 5 = full confidence). The data were analyzed statistically using the chi-squared test and the Mann-Whitney U test.

Results: The Google Gemini 2.5 Pro model gave 103 correct answers out of 120, an overall accuracy of 85.83%, well above the 60% passing threshold. In the subgroup analysis, the model scored 83% on clinical questions and 91% on general questions. This difference was not statistically significant (p = 0.417), and the effect size (Cohen's h = 0.19) was small. Furthermore, the model's confidence ratings showed that correct answers were generally given with higher confidence and incorrect ones with lower confidence, suggesting a positive correlation between confidence and accuracy, particularly for general questions. Due to limited data, however, the exact effect size of this relationship could not be determined.

Conclusions: Gemini 2.5 Pro's strong performance on the PES demonstrates the considerable potential of advanced AI models in supporting medical education, even in highly specialized fields such as pediatric surgery. The observed association between correctness and declared confidence may help users gauge the reliability of AI-generated responses. Nevertheless, high performance in an examination setting does not eliminate the need for verification and critical evaluation of AI-generated answers in real-world clinical and educational applications.
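The statistical steps reported above (a chi-squared test on clinical vs. general accuracy, Cohen's h as the effect size, and a Mann-Whitney U test on confidence ratings) can be illustrated with a short Python sketch. Note the assumptions: the abstract does not report the per-category question counts or the raw confidence ratings, so the 78/42 clinical/general split and the rating lists below are hypothetical placeholders, chosen only to be consistent with the reported totals (103/120 correct, roughly 83% and 91% per category); they are not the study's data.

```python
import math

from scipy import stats

def cohens_h(p1: float, p2: float) -> float:
    """Cohen's h effect size for two proportions: 2*asin(sqrt(p1)) - 2*asin(sqrt(p2))."""
    return 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))

# Hypothetical per-category counts (NOT reported in the abstract): a 78/42
# clinical/general split chosen so the totals agree with 103/120 correct
# overall and with the reported ~83% / ~91% category accuracies.
clinical_correct, clinical_total = 65, 78
general_correct, general_total = 38, 42

# Chi-squared test of independence on the 2x2 correct/incorrect table,
# the test the study used to compare clinical vs. general performance.
table = [
    [clinical_correct, clinical_total - clinical_correct],
    [general_correct, general_total - general_correct],
]
chi2, p_value, dof, expected = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p_value:.3f}")

h = cohens_h(general_correct / general_total, clinical_correct / clinical_total)
print(f"Cohen's h = {h:.2f}")

# Mann-Whitney U test comparing 1-5 confidence ratings of correct vs.
# incorrect answers. These rating lists are fabricated placeholders that
# only demonstrate the call; the study's raw ratings are not given here.
conf_correct = [5, 5, 4, 5, 4, 5, 3, 5]
conf_incorrect = [3, 2, 4, 2, 3]
u_stat, u_p = stats.mannwhitneyu(conf_correct, conf_incorrect, alternative="two-sided")
print(f"U = {u_stat}, p = {u_p:.3f}")
```

With the study's actual per-category counts, the chi-squared test should reproduce the reported p = 0.417 and h = 0.19; with the placeholder split above, the Yates-corrected test that scipy applies by default for 2x2 tables gives values in the same neighborhood, but these should not be read as the study's results.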
