Abstract
BACKGROUND: This study evaluated the performance of five large language models (LLMs) on actual clinical cases to provide guidance for selecting appropriate models for clinical decision-making.
OBJECTIVE: To assess the performance of LLMs in clinical decision-making for internal medicine and to provide evidence-based guidance for model selection in clinical practice.
METHODS: We conducted a retrospective cross-sectional study of 405 cases across nine subspecialties: cardiovascular, respiratory, gastroenterology, nephrology, rheumatology, endocrinology, neurology, hematology, and infectious diseases. Two senior clinicians evaluated model outputs on five dimensions: diagnosis, diagnostic criteria, differential diagnosis, examinations, and treatment. Statistical analyses were performed with the Kruskal–Wallis test, and pairwise comparisons were conducted with Dunn's test, with p-values adjusted by the Benjamini–Hochberg (BH) procedure.
RESULTS: Overall, significant performance differences were observed among the models (p = 0.001). All models performed worst in the respiratory subspecialty (p < 0.05). Gemini significantly outperformed the other models in differential diagnosis, the weakest dimension across all models (p < 0.05). Claude scored significantly lower than the other LLMs in cardiovascular and hematology cases (p < 0.05). Subgroup analysis indicated that the most pronounced performance disparities occurred in the cardiovascular subspecialty (p < 0.05).
CONCLUSION: GPT, O1, and Gemini demonstrated superior performance in clinical decision-making for internal medicine, whereas Claude performed worst. All LLMs showed deficiencies in differential diagnosis and poor management of respiratory diseases. Subspecialty complexity may be a performance differentiator for LLMs, and O1 may be particularly suitable for complex subspecialties such as cardiology.
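For reference, the statistical workflow described in METHODS (a Kruskal–Wallis omnibus test followed by Dunn's pairwise comparisons with Benjamini–Hochberg adjustment) can be sketched as below. This is a minimal illustration, not the study's analysis code: the column names, the four named models, and the synthetic scores are assumptions made purely for demonstration.

```python
# Illustrative sketch of the abstract's statistical workflow:
# Kruskal-Wallis omnibus test, then Dunn's post hoc pairwise test
# with Benjamini-Hochberg (FDR) p-value adjustment.
# Column names and scores are hypothetical, not the study's data.
import pandas as pd
from scipy.stats import kruskal
import scikit_posthocs as sp

# Hypothetical per-case scores for four of the evaluated models.
df = pd.DataFrame({
    "model": ["GPT"] * 5 + ["O1"] * 5 + ["Gemini"] * 5 + ["Claude"] * 5,
    "score": [4, 5, 4, 5, 4,
              5, 4, 5, 5, 4,
              5, 5, 4, 4, 5,
              3, 2, 3, 3, 2],
})

# Omnibus test: do score distributions differ across models?
groups = [g["score"].values for _, g in df.groupby("model")]
h_stat, p_value = kruskal(*groups)
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_value:.4f}")

# Pairwise comparisons: Dunn's test with BH adjustment,
# returning a matrix of adjusted p-values between models.
pairwise = sp.posthoc_dunn(df, val_col="score", group_col="model",
                           p_adjust="fdr_bh")
print(pairwise)
```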