Artificial Intelligence in Plastic Surgery Education: A Global Multimodel Benchmark of Large Language Models on the Plastic Surgery In-Service Training Examination


Abstract

BACKGROUND: Large language models (LLMs) are increasingly utilized in plastic surgery education. Previous studies have shown that flagship models can achieve high scores on medical examinations, including the Plastic Surgery In-Service Training Examination (PSITE). Yet evaluations often rely on single-shot accuracy of proprietary systems, neglecting stochastic variability as well as open-source and non-US alternatives. OBJECTIVES: The aim of this study was to comprehensively benchmark a globally representative cohort of 14 LLMs on the PSITE, assessing not only accuracy but also inter-run reliability and stochastic variability, and to evaluate their role as educational tools in plastic surgery training. METHODS: A cross-sectional study evaluated 7 proprietary and 7 open-source models using 100 text-based PSITE questions from the 2017-2018 examinations. Each model underwent 5 independent runs (n = 7000 evaluations). Performance metrics included mean accuracy (%), Fleiss' kappa (κ) for reliability, and the coefficient of variation (CV) for stability. Stratified analyses assessed performance across clinical domains, proprietary vs open-source architectures, and paid vs free subscription tiers. RESULTS: Claude Opus 4.5 (Anthropic, San Francisco, CA) (90.2%) and GPT-5.2 Pro (OpenAI, San Francisco, CA) (87.0%) achieved the highest accuracy. Proprietary models significantly outperformed open-source alternatives (mean 76.1% vs 60.2%) and demonstrated superior reliability (κ = 0.84 vs κ = 0.70). Stability varied widely, ranging from consistent error in Falcon H1 (CV = 0.00%) to erratic instability in Mistral Medium (Mistral AI, Paris, France) (CV = 32.2%). CONCLUSIONS: Contemporary LLMs possess substantial plastic surgery knowledge, yet meaningful disparities in reliability persist. Although proprietary models currently demonstrate superior reliability as educational tools, the presence of stochastic instability necessitates cautious adoption. Accuracy alone is insufficient to judge clinical utility; stability metrics are essential for selecting AI tools in surgical education.
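The abstract's three metrics can be illustrated with a small sketch. The following Python code is a hypothetical worked example, not the authors' analysis pipeline: the run data and answer counts are invented solely to show how mean accuracy, Fleiss' kappa (agreement across the 5 independent runs per question), and the coefficient of variation (run-to-run stability) would be computed.

```python
# Hypothetical sketch of the three metrics in the abstract; all data below
# is invented for illustration and does not reproduce the study's results.
from statistics import mean, pstdev

def fleiss_kappa(ratings):
    """ratings: one row per question; each row counts how many runs chose
    each answer option, with every row summing to the number of runs."""
    n = sum(ratings[0])                 # runs (raters) per question
    N = len(ratings)                    # number of questions (subjects)
    k = len(ratings[0])                 # number of answer options (categories)
    # Observed per-question agreement P_i
    P = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
    P_bar = mean(P)
    # Chance agreement from marginal option proportions
    p_j = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    P_e = sum(p * p for p in p_j)
    return (P_bar - P_e) / (1 - P_e)

# Toy example: 4 questions, 5 runs each, 4 answer options (A-D).
counts = [
    [5, 0, 0, 0],   # all 5 runs agree
    [4, 1, 0, 0],
    [5, 0, 0, 0],
    [0, 3, 2, 0],   # runs split between two options
]
kappa = fleiss_kappa(counts)

# Per-run accuracy (toy values) and CV of accuracy across runs, in percent.
run_acc = [0.78, 0.80, 0.76, 0.79, 0.77]
cv = pstdev(run_acc) / mean(run_acc) * 100

print(round(kappa, 3), round(cv, 2))
```

Under this framing, a CV of 0.00% (as reported for Falcon H1) means every run produced identical accuracy, even if that accuracy was consistently wrong, which is why the abstract treats stability and accuracy as separate axes.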
