Comparative Evaluation of Deep-Reasoning Large Language Models for Ophthalmic Emergencies


Abstract

PURPOSE: To evaluate contemporary deep-reasoning large language models (LLMs) for early assessment of ophthalmic emergencies using sequential, workflow-mimicking information levels.

DESIGN: Cross-sectional, vignette-based, head-to-head comparative evaluation.

SUBJECTS: Thirty-four deidentified emergency ophthalmology teaching cases curated from a publicly accessible repository.

METHODS: Each case was reconstructed into 3 sequential information levels (level 1 [L1]: history; level 2 [L2]: basic examination; level 3 [L3]: specialist examination). Six LLMs (Doubao, DeepSeek, Kimi-2, ChatGPT-5, Gemini-3, and Grok-4), operating in deep-reasoning mode, generated outputs that were independently scored by 2 ophthalmologists. Diagnoses were graded as fully correct, partially correct, or incorrect; the triage category (typical vs. atypical emergency) was rated as correct or incorrect. Ancillary test recommendations were mapped to a prespecified 10-category taxonomy and classified as undertesting, exact match, or overtesting. A 4-level composite outcome integrated diagnostic correctness, triage accuracy, and testing.

MAIN OUTCOME MEASURES: Diagnostic correctness (fully correct, partially correct, or incorrect), triage-category accuracy, ancillary test recommendation patterns, and the composite outcome (ideal, safe but overtesting, potentially dangerous, or intermediate).

RESULTS: Across 612 model-case-level outputs, 46.9% of diagnoses were fully correct, 24.5% partially correct, and 28.6% incorrect. Fully correct diagnoses increased from 43.1% at L1 to 53.9% at L3 (P = 0.048). Overall triage-category accuracy was 85.3% (range, 76.5%-94.1% across models; P = 0.003) and did not differ across information levels (P = 0.89). Ancillary test recommendations most commonly reflected undertesting (51.0%), followed by overtesting (27.5%) and exact matches (21.6%) (P < 0.001 across models). In generalized estimating equation pairwise comparisons, ChatGPT-5 showed higher odds of a fully correct diagnosis than DeepSeek (odds ratio [OR], 3.54; 95% confidence interval [CI], 1.49-8.43) and Gemini-3 (OR, 2.24; 95% CI, 1.31-3.83), and lower odds of potentially dangerous composite outcomes than DeepSeek (OR, 0.28; 95% CI, 0.10-0.74) and Gemini-3 (OR, 0.31; 95% CI, 0.11-0.89).

CONCLUSIONS: Deep-reasoning LLMs demonstrated high triage-category accuracy and moderate diagnostic performance for ophthalmic emergencies, with diagnostic correctness improving at higher information levels. However, ancillary testing patterns varied substantially, and ideal composite safety profiles were uncommon, supporting cautious, supervised deployment with explicit guardrails governing workup recommendations.

FINANCIAL DISCLOSURES: The author(s) have no proprietary or commercial interest in any materials discussed in this article.
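The undertesting / exact-match / overtesting grading described in the methods can be pictured as a set comparison between the model's recommended test categories and the case's reference categories from the 10-category taxonomy. The rule below is a plausible sketch only, not the study's published protocol, and the category names are invented for illustration:

```python
def classify_testing(recommended: set, reference: set) -> str:
    """Compare recommended vs. reference ancillary-test categories.
    Plausible rule (a sketch, not the study's actual protocol):
      - any reference category missing  -> "undertesting"
      - all covered, plus extras        -> "overtesting"
      - identical sets                  -> "exact match"
    """
    if not reference <= recommended:      # some indicated test was omitted
        return "undertesting"
    if recommended > reference:           # all indicated tests plus extras
        return "overtesting"
    return "exact match"

# Hypothetical test categories, for illustration only:
ref = {"B-scan ultrasonography", "intraocular pressure"}
print(classify_testing({"intraocular pressure"}, ref))   # undertesting
print(classify_testing(ref | {"orbital CT"}, ref))       # overtesting
print(classify_testing(set(ref), ref))                   # exact match
```

Note that this sketch resolves mixed patterns (some tests missing, others extraneous) as undertesting, on the reasoning that a missed indicated test is the safety-relevant error; the study does not state how such cases were handled.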
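The pairwise effect estimates above are odds ratios with 95% confidence intervals from generalized estimating equation models, which adjust the standard errors for repeated outputs on the same case. As a simplified illustration of the underlying arithmetic only, the sketch below computes an unadjusted OR and Wald-type 95% CI from a 2x2 table; the counts are hypothetical and do not come from the study, and a real GEE would widen or narrow the interval to account for within-case clustering:

```python
import math

# Sanity check on the denominator reported in the results:
# 34 cases x 6 models x 3 information levels = 612 outputs.
assert 34 * 6 * 3 == 612

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table:
         a = model A fully correct,  b = model A not fully correct
         c = model B fully correct,  d = model B not fully correct
    (The study's GEE additionally adjusts for clustering by case.)"""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts (not from the study):
or_, lo, hi = odds_ratio_ci(60, 42, 40, 62)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

An OR above 1 with a CI excluding 1 (as for ChatGPT-5 vs. DeepSeek, OR 3.54, CI 1.49-8.43) indicates higher odds of a fully correct diagnosis; an OR below 1 with a CI excluding 1 (as for the potentially dangerous composite outcome) indicates lower odds.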
