Abstract
OBJECTIVE: To compare the performance of state-of-the-art Gemini and GPT models on ophthalmology board-style questions and to examine variation by subspecialty, cognitive complexity, and question type.

DESIGN: Cross-sectional evaluation of 12 distinct large language model (LLM) configurations using a standardized ophthalmology question set.

SUBJECTS: Five hundred multiple-choice questions (250 from the American Academy of Ophthalmology's Basic and Clinical Science Course [BCSC] and 250 from StatPearls).

METHODS: Twelve configurations of seven LLMs (Gemini 3 Pro, Gemini 2.5 Pro, GPT-5.1 Pro, GPT-5 Pro, GPT-5.2, GPT-5.1, and GPT-5) answered the questions under standardized prompting procedures. Questions were categorized by subspecialty, multimodal content (image-based vs. text-only), and cognitive complexity (first, second, or third order). Accuracy, paired discordance (McNemar tests), and one-way analysis of variance with Tukey correction were used to compare performance. Human benchmarking used BCSC percent-correct data.

MAIN OUTCOME MEASURES: Overall accuracy, subspecialty accuracy, image-based vs. text-only accuracy, accuracy by cognitive complexity, and paired model-level discordance.

RESULTS: Model accuracy ranged from 81.4% to 94.0%. Gemini 3 Pro High Reasoning achieved the highest accuracy (94.0%), followed by Gemini 3 Pro Low Reasoning (92.4%). GPT-5.1 Pro led the GPT family (90.4%), whereas GPT-5.2 Base Model performed lowest (81.4%). Analysis of variance showed significant heterogeneity (P < 0.001), but most Tukey-corrected pairwise differences were nonsignificant. McNemar tests demonstrated significantly more correct paired responses for Gemini 3 Pro High Reasoning than for GPT-5.2 and all GPT-5/5.1 variants. Models performed markedly better on BCSC questions (mean 94.4%) than on StatPearls questions (81.9%); human BCSC mean accuracy was 64.5%. Image-based items produced a 10- to 22-percentage-point accuracy decrement across all systems. Accuracy declined with increasing cognitive complexity, with the clearest separation on third-order management questions.

CONCLUSIONS: Gemini 3 Pro showed the best general-purpose LLM performance on ophthalmology board-style questions, providing near-perfect accuracy and outperforming all GPT-5 family variants across subspecialties and complexity levels. Significant deficits on image-based and third-order questions highlight persistent multimodal limitations and the need for ongoing benchmarking using challenging, clinically grounded datasets.

FINANCIAL DISCLOSURES: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
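For readers unfamiliar with the paired-discordance analysis named in METHODS, the sketch below illustrates (in Python, using statsmodels) how a McNemar test compares two models answering the same question set. It is not the authors' code; the model labels and the simulated per-question correctness vectors are hypothetical placeholders standing in for graded model outputs.

```python
# Minimal sketch of a McNemar paired comparison between two models graded on
# the same question set (hypothetical data; not the study's actual analysis code).
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)

# Hypothetical per-question correctness (1 = correct, 0 = incorrect) for two
# models on the same 500 items; real data would come from graded responses.
model_a = rng.binomial(1, 0.94, size=500)  # e.g., a high-reasoning configuration
model_b = rng.binomial(1, 0.85, size=500)  # e.g., a base-model configuration

# Paired 2x2 table: rows = model A correct/incorrect, columns = model B correct/incorrect.
table = np.array([
    [np.sum((model_a == 1) & (model_b == 1)), np.sum((model_a == 1) & (model_b == 0))],
    [np.sum((model_a == 0) & (model_b == 1)), np.sum((model_a == 0) & (model_b == 0))],
])

# Exact McNemar test evaluates only the discordant cells (questions one model
# answered correctly and the other did not).
result = mcnemar(table, exact=True)
print(f"McNemar statistic = {result.statistic:.0f}, p = {result.pvalue:.4f}")
```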