Abstract
PURPOSE: To evaluate and compare the performance of nine contemporary LLM configurations on sleep medicine certification examination-aligned questions, analyzing version differences, pricing tiers, and subdomain competencies.

METHODS: Cross-sectional comparative analysis of 197 multiple-choice questions structured according to American Academy of Sleep Medicine (AASM) certification standards. Nine LLM configurations were evaluated: ChatGPT (GPT-3.5 free, GPT-4o paid), Gemini (2.5 Flash free, 2.5 Pro paid), Claude (3.7 Sonnet previous, Opus 4 paid), Deepseek V3 (free), xAI Grok3 (free), and Llama 3 (free). Each question was posed three times in independent sessions to minimize response variance. The first complete response from each iteration was recorded, and final accuracy was determined using a strict 3/3 concordance criterion (a question was scored correct only when all three iterations yielded the same correct answer). While alternative scoring approaches exist (single-try accuracy, 2/3 majority voting), strict concordance was selected as the primary metric to minimize stochastic variation and ensure robust performance estimates. Supplementary analyses using majority voting (2/3) yielded consistent model rankings with marginally higher absolute accuracy values. Performance metrics included overall accuracy rates, 95% confidence intervals, and subdomain-specific analyses across seven sleep medicine categories. Statistical analyses employed Pearson's chi-square test for heterogeneity and McNemar's test for pairwise comparisons. This text-based simulation evaluated model performance on certification-style questions and does not replicate actual clinical examination conditions.

RESULTS: Model performance demonstrated significant heterogeneity (χ² = 101.95, df = 8, p < 0.001), with accuracy rates ranging from 68.5% to 95.9%. Gemini 2.5 Pro achieved the highest overall accuracy (95.9%, 95% CI: 93.2-98.7%), followed by Claude Opus 4 (93.9%, 95% CI: 90.6-97.2%) and ChatGPT GPT-4o (93.4%, 95% CI: 89.9-96.9%). Premium versions consistently outperformed their free counterparts, with differences ranging from 5.1 to 8.6 percentage points (all p < 0.05). Subdomain analysis revealed the highest performance consistency in Secondary Sleep Disorders (92.0% mean accuracy) and the greatest variability in Diagnostic Methods (85.9% mean accuracy). Sensitivity analysis comparing three scoring criteria (single-try ≥1/3, majority voting ≥2/3, strict concordance 3/3) showed that scoring methodology had minimal impact on model rankings (Spearman's ρ = 0.879-1.000, all p < 0.01). Majority voting and strict concordance yielded identical accuracy rates in seven of nine models owing to high response consistency (95.8% on average). Eight of nine models exceeded the 80% reference benchmark under all three scoring criteria.

CONCLUSION: Contemporary LLMs demonstrate substantially improved performance compared with previous evaluations, with premium models exceeding the 80% reference benchmark. However, these results reflect performance on a certification-aligned question bank rather than the official board examination itself. The significant performance advantage of paid versions raises important questions about equitable access to AI-enhanced medical education and clinical decision support tools.
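To make the scoring and heterogeneity analysis described in METHODS concrete, the sketch below shows one way the three scoring criteria and the chi-square test could be implemented. Variable names, the data layout, and the use of SciPy are illustrative assumptions, not the authors' actual analysis code.

```python
# Minimal sketch of the scoring rules and heterogeneity test described above.
# Assumption: `responses[model]` holds 197 entries, each a tuple of three
# booleans marking whether iterations 1-3 answered that question correctly.
from scipy.stats import chi2_contingency

def score(iterations, rule="strict"):
    """Apply one of the three scoring criteria to a question's three iterations."""
    hits = sum(iterations)
    if rule == "single":    # single-try: correct on >=1 of 3 iterations
        return hits >= 1
    if rule == "majority":  # majority voting: correct on >=2 of 3 iterations
        return hits >= 2
    return hits == 3        # strict 3/3 concordance (primary metric)

def accuracy(question_results, rule="strict"):
    """Proportion of questions counted correct under the given scoring rule."""
    return sum(score(q, rule) for q in question_results) / len(question_results)

def heterogeneity_test(responses, rule="strict"):
    """Pearson chi-square test on the models x (correct, incorrect) table."""
    table = []
    for qs in responses.values():
        correct = sum(score(q, rule) for q in qs)
        table.append([correct, len(qs) - correct])
    chi2, p, df, _ = chi2_contingency(table)
    return chi2, df, p
```

Pairwise comparisons between two models answering the same questions could be handled analogously with an exact McNemar test on the 2x2 table of discordant answers (for example, via statsmodels' `mcnemar`).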