Abstract
The electrocardiogram (ECG) is a central tool in cardiovascular diagnostics, yet its interpretation requires expertise and remains subject to variability. Multimodal large language models (MLLMs) have shown emerging capabilities in medical image analysis, but their performance in ECG interpretation remains insufficiently characterized. This study evaluated the diagnostic accuracy and inter-run reliability of five MLLMs across ECG interpretation tasks. Thirteen standard 12-lead ECGs were presented to five models (ChatGPT-5.3, Gemini 3.1 Pro, Claude Opus 4.6, Grok 4.1, and ERNIE 5.0) across five independent runs per case, yielding 2275 task-level assessments. Model outputs for six categorical interpretation tasks (rhythm, electrical axis, PR/P-wave morphology, QRS duration, ST/T-wave morphology, and QTc interval) were compared against expert-consensus ground truth, while heart rate estimation was evaluated using mean absolute error (MAE). Overall categorical accuracy ranged from 52.3% to 64.9%. QRS duration classification achieved the highest accuracy (66.2–90.8%), whereas ST/T-wave assessment showed the lowest performance (20.0–41.5%). Heart rate MAE ranged from 14.8 to 46.7 bpm. A dissociation between diagnostic accuracy and inter-run reliability was observed across models. These findings indicate that current MLLMs do not achieve clinically reliable ECG interpretation performance and highlight the importance of assessing both diagnostic accuracy and inter-run reliability when evaluating artificial intelligence systems in biomedical diagnostics.