Abstract
BACKGROUND: Data preprocessing is a significant step in machine learning that improves model performance and decreases running time. It may include handling missing values, removing outliers, augmenting data, reducing dimensionality, and controlling for confounding variables. OBJECTIVE: This commentary explores the common preprocessing steps used in medical machine learning and highlights their hidden conceptual costs, including reduced model explainability and clinical interpretability. The focus is on the conceptual rather than empirical implications of these preprocessing decisions. METHODS: A literature review was undertaken to explore data preprocessing steps in machine learning. RESULTS: Although preprocessing steps are found to improve model accuracy, they may block new findings and hinder model explainability if they are not carefully considered, especially in medicine. We identify key risks, such as bias introduction and oversimplification, and outline mitigation strategies. CONCLUSION: The novelty of this work lies in systematically connecting preprocessing practices with explainability challenges in healthcare artificial intelligence and in suggesting approaches to balance performance with explainability. It highlights the need for careful design of preprocessing pipelines in medical artificial intelligence systems to ensure both reliable predictions and clinical trust.