Abstract
BACKGROUND: Large language models (LLMs) are increasingly used in plastic surgery education. Previous studies have shown that flagship models can achieve high scores on medical examinations, including the Plastic Surgery In-Service Training Examination (PSITE). Yet evaluations often rely on single-shot accuracy of proprietary systems, neglecting stochastic variability as well as open-source and non-US alternatives.

OBJECTIVES: The aim of this study was to comprehensively benchmark a globally representative cohort of 14 LLMs on the PSITE, assessing not only accuracy but also inter-run reliability and stochastic variability, and to evaluate their role as educational tools in plastic surgery training.

METHODS: A cross-sectional study evaluated 7 proprietary and 7 open-source models on 100 text-based PSITE questions from the 2017-2018 examinations. Each model underwent 5 independent runs (n = 7000 evaluations). Performance metrics included mean accuracy (%), Fleiss' kappa (κ) for inter-run reliability, and the coefficient of variation (CV) for stability. Stratified analyses compared performance across clinical domains, proprietary vs open-source architectures, and paid vs free subscription tiers.

RESULTS: Claude Opus 4.5 (Anthropic, San Francisco, CA) (90.2%) and GPT-5.2 Pro (OpenAI, San Francisco, CA) (87.0%) achieved the highest accuracy. Proprietary models significantly outperformed open-source alternatives (mean 76.1% vs 60.2%) and demonstrated superior reliability (κ = 0.84 vs κ = 0.70). Stability varied widely, ranging from consistently reproduced errors in Falcon H1 (CV = 0.00%) to erratic instability in Mistral Medium (Mistral AI, Paris, France) (CV = 32.2%).

CONCLUSIONS: Contemporary LLMs possess substantial plastic surgery knowledge, yet meaningful disparities in reliability persist. Although proprietary models currently demonstrate superior reliability as educational tools, stochastic instability necessitates cautious adoption. Accuracy alone is insufficient to judge clinical utility; stability metrics are essential when selecting AI tools for surgical education.
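For readers unfamiliar with the two reliability metrics named above, the following is a minimal sketch of how Fleiss' kappa and the coefficient of variation could be computed from per-run answer tallies. The function names and toy data are illustrative assumptions, not the study's actual grading pipeline:

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for inter-run agreement.

    counts[i][j] = number of runs that chose answer option j on question i
    (in the study: 5 runs per question, multiple-choice options as categories).
    """
    N = len(counts)            # number of questions
    n = sum(counts[0])         # runs (raters) per question
    k = len(counts[0])         # number of answer categories
    # Marginal proportion of each category across all questions and runs.
    p_j = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    # Mean observed agreement per question.
    P_bar = sum((sum(c * c for c in row) - n) / (n * (n - 1))
                for row in counts) / N
    # Expected agreement by chance.
    P_e = sum(p * p for p in p_j)
    if P_e == 1.0:             # all runs picked one category everywhere
        return 1.0
    return (P_bar - P_e) / (1 - P_e)


def coefficient_of_variation(run_accuracies):
    """CV (%) of per-run accuracy: sample stdev / mean * 100."""
    m = sum(run_accuracies) / len(run_accuracies)
    var = sum((a - m) ** 2 for a in run_accuracies) / (len(run_accuracies) - 1)
    return (var ** 0.5) / m * 100
```

A model that answers identically on every run (like Falcon H1 in the results) yields CV = 0 regardless of whether those answers are correct, which is why the abstract treats accuracy, agreement, and stability as separate axes.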