Comparative Performance of Gemini 3 Pro and GPT-5 Family Models on Ophthalmology Board-Style Questions


Abstract

OBJECTIVE: To compare the performance of state-of-the-art Gemini and GPT models on ophthalmology board-style questions and to examine variation by subspecialty, cognitive complexity, and question type.

DESIGN: Cross-sectional evaluation of 12 distinct large language model (LLM) configurations using a standardized ophthalmology question set.

SUBJECTS: Five hundred multiple-choice questions (250 from the American Academy of Ophthalmology's Basic and Clinical Science Course [BCSC]; 250 from StatPearls).

METHODS: Twelve configurations of seven LLMs (Gemini 3 Pro, Gemini 2.5 Pro, GPT-5.1 Pro, GPT-5 Pro, GPT-5.2, GPT-5.1, and GPT-5) answered the questions under standardized prompting procedures. Questions were categorized by subspecialty, multimodal content (image vs. text-only), and cognitive complexity (first, second, or third order). Accuracy, paired discordance (McNemar tests), and one-way analysis of variance with Tukey correction were used to compare performance. Human benchmarking used BCSC percent-correct data.

MAIN OUTCOME MEASURES: Overall accuracy, subspecialty accuracy, image vs. non-image accuracy, cognitive-complexity accuracy, and paired model-level discordance.

RESULTS: Model accuracy ranged from 81.4% to 94.0%. Gemini 3 Pro High Reasoning achieved the highest accuracy (94.0%), followed by Gemini 3 Pro Low Reasoning (92.4%). GPT-5.1 Pro led the GPT family (90.4%), whereas the GPT-5.2 Base Model performed lowest (81.4%). Analysis of variance showed significant heterogeneity (P < 0.001), but most Tukey-corrected pairwise differences were nonsignificant. McNemar tests demonstrated significantly more correct paired responses for Gemini 3 Pro High Reasoning than for GPT-5.2 and all GPT-5/5.1 variants. Models performed markedly better on BCSC (mean 94.4%) than on StatPearls (81.9%); human mean accuracy on BCSC was 64.5%. Image-based items produced a 10- to 22-point accuracy decrement across all systems. Accuracy declined with increasing cognitive complexity, with the clearest separation on third-order management questions.

CONCLUSIONS: Gemini 3 Pro showed the best general-purpose LLM performance on ophthalmology board-style questions, achieving near-perfect accuracy and outperforming all GPT-5 family variants across domains and complexity levels. Significant deficits on image-based and third-order questions highlight persistent multimodal limitations and the need for ongoing benchmarking with challenging, clinically grounded datasets.

FINANCIAL DISCLOSURES: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
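The paired discordance analysis the abstract describes (McNemar's test on items where two models disagree) depends only on the two discordant cell counts. A minimal sketch of the exact two-sided test is below; the counts `b` and `c` are illustrative placeholders, not the study's data:

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Exact two-sided McNemar p-value from the discordant counts:
    b = items model A answered correctly and model B incorrectly,
    c = items model B answered correctly and model A incorrectly.
    Under H0 (equal accuracy), b ~ Binomial(b + c, 0.5)."""
    n = b + c
    if n == 0:
        return 1.0  # no discordant pairs: no evidence either way
    k = min(b, c)
    # Two-sided exact binomial tail, capped at 1.0
    p = 2 * sum(comb(n, i) for i in range(k + 1)) * 0.5 ** n
    return min(p, 1.0)

# Hypothetical discordant counts over a 500-question set
p_value = mcnemar_exact(b=40, c=15)
```

Concordant items (both models right, or both wrong) carry no information about which model is better, which is why the test ignores them; this also makes the paired comparison more sensitive than comparing raw accuracies.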
