Abstract
INTRODUCTION: Alzheimer's disease (AD) is a major global health concern, projected to affect 12.7 million Americans by 2050. Machine learning (ML) algorithms have been developed for AD diagnosis and progression prediction, but the lack of racial diversity in clinical datasets raises concerns about their generalizability across demographic groups, particularly underrepresented populations. Studies show that ML algorithms can inherit biases from their training data, leading to biased AD predictions.
METHODS: This study investigates the fairness of ML models in AD diagnosis. We hypothesize that models trained on a single racial group perform well within that group but poorly on others. We employ feature selection and model training techniques to improve fairness.
RESULTS: Our findings support the hypothesis that ML models trained on one racial group underperform on others. We also demonstrate that applying fairness techniques to ML models reduces their bias.
DISCUSSION: This study highlights the need for racially diverse datasets and fair ML models for AD prediction.