Abstract
Machine-learning (ML) models for Alzheimer's disease (AD) frequently yield divergent conclusions, raising concerns about robustness, reproducibility, and interpretability. This instability is partly driven by researcher biases and analytic variability. Coupled with the clinical heterogeneity, mixed pathologies, and cohort differences inherent to AD research, these issues limit the reliability and validity of conclusions drawn from individual models. We introduce AutoML-Multiverse, an instability-aware framework that characterises how analytic choices influence ML-based conclusions. AutoML-Multiverse explores a large space of ~20,000 analysis pipelines and, by retaining the full distribution of pipeline results, enables direct examination of analytic variability. We evaluate the framework across 20 classification tasks in two independent cohorts studying Alzheimer's disease progression (ADNI, N≤1,930; NACC, N≤1,057), using multiple data modalities: neuroimaging, clinical/cognitive measures, and fluid biomarkers. AutoML-Multiverse performance was equal to or better than that of non-automated models across all tasks. For example, accuracy for classifying stable versus progressive mild cognitive impairment (MCI) was 0.68±0.06 (ADNI) and 0.63±0.08 (NACC), while AD versus cognitively normal (CN) classification reached 0.97±0.01 (ADNI). Crucially, each modality's utility was task- and cohort-dependent: clinical measures dominated diagnostic tasks, whereas imaging better predicted progression, and modality preferences often switched between cohorts, highlighting the limited generalisability of single-cohort results. Using AutoML-Multiverse, we obtained strong classification performance without pre-specifying key model design choices. By reducing analysis-driven variability and explicitly characterising uncertainty, instability-aware evaluation can support the development of more robust and clinically applicable prediction models in AD research.
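The core idea of a multiverse analysis, as described above, can be illustrated with a minimal sketch: enumerate every combination of analytic choices, score each resulting pipeline, and keep the full distribution of scores rather than a single "best" model. The choice axes and the scoring function below are hypothetical stand-ins (the paper's actual search space of ~20,000 pipelines and its evaluation procedure are not specified here); a deterministic dummy score keeps the example self-contained.

```python
import itertools
import statistics

# Hypothetical analytic choice axes; the real multiverse (imputation,
# feature sets, classifiers, hyperparameters, ...) is far larger.
CHOICES = {
    "scaler": ["none", "standard", "robust"],
    "feature_set": ["clinical", "imaging", "fluid"],
    "model": ["logreg", "rf", "svm", "gboost"],
}

def evaluate(pipeline):
    # Stand-in for a cross-validated accuracy of one fitted pipeline.
    # Deterministic dummy score so the sketch runs without data.
    return 0.5 + 0.005 * sum(len(v) for v in pipeline.values())

def multiverse(choices):
    """Enumerate every combination of analytic choices and retain the
    full distribution of scores instead of one pre-specified model."""
    keys = list(choices)
    pipelines = [dict(zip(keys, combo))
                 for combo in itertools.product(*(choices[k] for k in keys))]
    scores = [evaluate(p) for p in pipelines]
    return pipelines, scores

pipelines, scores = multiverse(CHOICES)
print(len(pipelines))  # → 36 (3 * 3 * 4 combinations)
print(round(statistics.mean(scores), 3), round(statistics.stdev(scores), 3))
```

Retaining `scores` as a distribution (rather than reporting only its maximum) is what makes the analytic variability directly inspectable, mirroring the framework's instability-aware reporting of mean±SD performance.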