Abstract
BACKGROUND: Domain-specific large language models (LLMs) such as Ortho GPT offer potential advantages over general-purpose models in medical education, including improved factual accuracy and contextual relevance. This study evaluates the performance of Ortho GPT against general-purpose LLMs and senior medical students on validated orthopedic examination questions.

METHODS: Six LLMs (Ortho GPT 4o, ChatGPT 4o, ChatGPT 3.5, Perplexity AI, DeepSeek-R1, and Llama 3.3-70B) were tested on German-language multiple-choice items from final-year medical student orthopedic exams. Each model answered identical questions under standardized zero-shot conditions; accuracy rates and item-level results were compared using McNemar's test, Jaccard similarity, and point-biserial correlation with student difficulty ratings.

RESULTS: Ortho GPT achieved the highest accuracy of all models. McNemar's tests showed significant superiority of Ortho GPT over DeepSeek-R1 (p = 2.33 × 10⁻³⁵), Llama 3.3-70B (p = 1.11 × 10⁻³²), and Perplexity AI (p = 4.01 × 10⁻⁵). The difference between Ortho GPT and ChatGPT 4o was non-significant (p = 0.065), indicating near-equivalent performance to the strongest general-purpose model. No LLM's results correlated with student item difficulty (|r| < 0.07, p > 0.05), indicating that the models solved items independently of human-perceived difficulty. Jaccard indices showed moderate response overlap between Ortho GPT and ChatGPT 4o but distinct response profiles relative to the other general-purpose LLMs.

CONCLUSIONS: These findings demonstrate the superiority of Ortho GPT in orthopedic exam accuracy and contextual relevance, attributable to its specialized training data. Domain specialization enables performance that matches or exceeds the best general-purpose LLMs in orthopedics, underscoring its importance for reliable, curriculum-aligned support in medical education.
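For readers unfamiliar with the statistics named in METHODS, the following is a minimal Python sketch of how such an item-level comparison could be computed. The per-item correctness vectors, difficulty ratings, and item count below are hypothetical placeholders, not the study's data; mcnemar and pointbiserialr are standard functions from statsmodels and SciPy.

import numpy as np
from scipy.stats import pointbiserialr
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)
n_items = 200  # hypothetical item count; the abstract does not state it

# Hypothetical per-item correctness vectors (1 = correct) for two models,
# plus hypothetical student-perceived difficulty ratings per item.
ortho = rng.integers(0, 2, n_items)
chatgpt = rng.integers(0, 2, n_items)
difficulty = rng.uniform(0.0, 1.0, n_items)

# McNemar's test operates on the 2x2 table of paired agreements/disagreements.
table = np.array([
    [np.sum((ortho == 1) & (chatgpt == 1)), np.sum((ortho == 1) & (chatgpt == 0))],
    [np.sum((ortho == 0) & (chatgpt == 1)), np.sum((ortho == 0) & (chatgpt == 0))],
])
result = mcnemar(table, exact=True)
print(f"McNemar p = {result.pvalue:.3g}")

# Jaccard similarity: overlap of the two models' sets of correctly answered items.
inter = np.sum((ortho == 1) & (chatgpt == 1))
union = np.sum((ortho == 1) | (chatgpt == 1))
print(f"Jaccard index = {inter / union:.3f}")

# Point-biserial correlation between one model's binary correctness
# and the continuous item-difficulty ratings.
r, p = pointbiserialr(ortho, difficulty)
print(f"point-biserial r = {r:.3f}, p = {p:.3g}")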