Abstract
Background
Artificial intelligence (AI) and large language models (LLMs) are increasingly integrated into medicine, yet their role in clinical decision-making remains underexplored. Neurology provides an important testing ground for these systems because neurological diagnoses often involve complex, overlapping syndromes, subtle clinical distinctions, and time-sensitive decisions that can significantly affect patient outcomes.

Objective
This study evaluated the diagnostic and therapeutic reasoning of three LLMs (ChatGPT 4.0, Google Gemini, and Claude 3.5 Sonnet) using a complex neurological case of acute Guillain-Barré syndrome (GBS) in the setting of chronic spinal epidural lipomatosis (SEL).

Methods
Each model was prompted with an identical case and instructed to generate both a diagnostic assessment and a treatment plan. Four blinded, board-certified neurologists assessed the outputs using a standardized rubric across four domains: diagnostic accuracy, differential diagnoses, intervention plan, and monitoring/follow-up.

Results
All three models correctly identified GBS and proposed guideline-consistent therapies (intravenous immunoglobulin [IVIG] and plasmapheresis). Claude 3.5 Sonnet achieved the highest mean total score (18.5/20), followed by ChatGPT 4.0 (17.5) and Google Gemini (17.25). Domain-level scoring revealed distinct performance patterns: Google Gemini scored highest in primary diagnosis, Claude 3.5 Sonnet achieved the only perfect mean score in differential diagnoses and led in intervention planning, and ChatGPT 4.0 performed strongest in monitoring and follow-up recommendations. Although all models produced clinically appropriate assessments and plans, none matched the physician's gold standard in contextualizing SEL-specific considerations or in developing detailed, longitudinal management strategies. These findings reflect descriptive reviewer assessments only; no inferential statistical testing was performed because of the study's single-case design.
Conclusions
These findings highlight both the promise and the limitations of LLMs in supporting complex neurological decision-making. While capable of accurate diagnosis and evidence-based intervention planning, the models demonstrated gaps in personalization and long-term care planning, underscoring the need for cautious integration alongside physician expertise.