Abstract
The convergence of artificial intelligence (AI) and drug discovery is accelerating therapeutic target identification, refining drug candidates, and streamlining the path from laboratory research to clinical application. Despite these promising advances, the inherent opacity of AI-driven models, especially deep-learning (DL) models, poses a significant "black-box" problem, limiting interpretability and acceptance among pharmaceutical researchers. Explainable artificial intelligence (XAI) has emerged as a crucial solution for enhancing transparency, trust, and reliability by clarifying the decision-making mechanisms that underpin AI predictions. This review systematically investigates the principles and methodologies underpinning XAI, highlighting tools, models, and frameworks explicitly designed for drug-discovery tasks. XAI applications in healthcare are explored, with an in-depth discussion of their potential role in accelerating drug-discovery processes such as molecular modeling, therapeutic target identification, ADME (absorption, distribution, metabolism, and excretion) prediction, clinical trial design, personalized medicine, and molecular property prediction. Furthermore, this article critically examines how XAI approaches address the black-box nature of AI models, bridging the gap between computational predictions and practical pharmaceutical applications. Finally, we discuss the challenges of deploying XAI methodologies, focusing on critical research directions for improving transparency and interpretability in AI-driven drug discovery. This review emphasizes the importance of researchers staying current with evolving XAI technologies to fully realize their transformative potential in improving the efficiency, reliability, and clinical impact of drug-discovery pipelines.