Abstract
Artificial intelligence (AI) systems are now prevalent in our daily lives and hold promise for transforming high-stakes fields such as healthcare. Medical AI systems show considerable potential to support diagnostics and treatment recommendations. As these systems play a growing role in clinical decision-making, ensuring transparency in their design, operation, and outcomes is essential for building trust among key stakeholders, including patients, providers, developers, and regulators. However, many systems still function as "black boxes," making it difficult for clinicians, patients, and other stakeholders to interpret and verify their inner workings. Here, we examine the current state of transparency in medical AI, identifying the key challenges and risks that opaque systems pose. After motivating the need for transparency across the machine learning pipeline, from training data to model development to model deployment, we explore a range of techniques that promote explainability throughout the pipeline, and we highlight the importance of continual monitoring and system updates in keeping AI systems reliable over time. Finally, we address barriers that inhibit the integration of transparency tools into clinical settings and review regulatory frameworks that prioritize transparency in emerging AI systems. Through this survey, we aim to raise awareness of current challenges and offer actionable insights to researchers, clinicians, and regulators on building trustworthy and ethically responsible AI solutions for healthcare.