Abstract
Aim
Artificial intelligence (AI) has shown tremendous potential for improving diagnostic accuracy and efficiency in radiology. This study assesses the diagnostic performance of Google Gemini (version 1.5 Flash; Google DeepMind, Mountain View, California, USA), a proprietary large language model, in interpreting challenging diagnostic cases from the American Journal of Neuroradiology's (AJNR) "Case of the Month" series.

Materials and methods
We analyzed 143 neuroradiology cases spanning the brain, head and neck, and spine. Each case unfolded over four weeks, beginning with the clinical history and followed by incrementally released imaging findings. At each stage, Google Gemini was prompted with the question, "What is the diagnosis?" Its accuracy was assessed at each level and across specialty categories. The data used were publicly available, and no ethical approval was necessary.

Results
Gemini's diagnostic accuracy improved as case data accumulated, from 3.5% with the clinical history alone to 45.7% after complete imaging was supplied. Accuracy by category was highest for spine cases (51.9%), followed by head and neck (45.5%) and brain (44.0%). A chi-square test for trend confirmed that the performance increase across stages was statistically significant (p < 0.0000000001).

Conclusion
Google Gemini demonstrates moderate diagnostic accuracy that improves as information accumulates. While encouraging, its shortcomings underscore the need for continual validation and transparency. This study shows the expanding relevance of AI in neuroradiology and the need for comprehensive evaluation before clinical integration.
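The trend analysis described above can be sketched with a Cochran-Armitage test for trend in proportions, a standard form of the chi-square test for trend. The weekly correct-diagnosis counts below are purely illustrative assumptions (the abstract reports only the endpoint accuracies of 3.5% and 45.7% over 143 cases), and the function name is hypothetical; this is a minimal stdlib-only sketch, not the study's actual analysis code.

```python
import math

def cochran_armitage_trend(successes, totals, scores):
    """Cochran-Armitage test for trend in binomial proportions.

    successes[i] / totals[i] is the proportion in group i, and scores[i]
    is that group's ordinal score. Returns (z, two_sided_p) using the
    normal approximation.
    """
    N = sum(totals)
    R = sum(successes)
    p_bar = R / N  # pooled proportion under the null of no trend
    # Numerator: score-weighted deviations from the pooled expectation
    num = sum(t * (r - n * p_bar)
              for t, r, n in zip(scores, successes, totals))
    st = sum(n * t for t, n in zip(scores, totals))
    stt = sum(n * t * t for t, n in zip(scores, totals))
    var = p_bar * (1 - p_bar) * (stt - st * st / N)
    z = num / math.sqrt(var)
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p, normal approximation
    return z, p

# Hypothetical weekly correct-diagnosis counts (NOT the study's data):
correct = [5, 30, 50, 65]        # illustrative counts for weeks 1-4
totals = [143, 143, 143, 143]    # all 143 cases evaluated at each stage
z, p = cochran_armitage_trend(correct, totals, scores=[0, 1, 2, 3])
print(f"z = {z:.2f}, p = {p:.2e}")
```

With any strongly monotonic sequence of counts like the illustrative one above, the statistic is large and the p-value extremely small, consistent with the highly significant trend the study reports.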