Abstract
BACKGROUND/OBJECTIVES: Pancreatic cancer (PC) remains among the most lethal malignancies worldwide, with a persistently low 5-year survival rate despite advances in systemic therapies and surgical innovation. Machine learning (ML) has emerged as a transformative tool for early detection, prognostic modelling, and treatment planning in PC, yet widespread clinical use is constrained by the "black box" nature of many models. Explainable artificial intelligence (XAI) offers a pathway to reconcile model accuracy with clinical trust, enabling transparent, reproducible, and clinically meaningful predictions.

METHODS: We reviewed the literature published between 2020 and 2025, focusing on ML-based studies in PC that incorporated or discussed XAI techniques. Methods were grouped by model architecture, data modality, and interpretability framework. We synthesized findings to evaluate the technical underpinnings, interpretability outcomes, and clinical relevance of XAI applications.

RESULTS: Of 21 studies on ML in PC, only three explicitly integrated XAI, primarily using SHAP and SurvSHAP. These methods helped identify key biomarkers, comorbidities, and survival predictors while enhancing clinician trust. XAI approaches were categorized by stage (ante-hoc vs. post-hoc), compatibility (model-agnostic vs. model-specific), and scope (local vs. global explanations). Barriers to adoption included methodological instability, limited external validation, weak workflow integration, and a lack of standardized evaluation.

CONCLUSIONS: XAI has the potential to serve as a cornerstone for advancing transparent, trustworthy ML in PC prediction. By clarifying model reasoning, XAI enhances clinical interpretability and regulatory readiness. This review provides a technical and clinical synthesis of current XAI practices, positioning explainability as essential for translating ML innovations into actionable oncology tools.
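As a concrete illustration of the attribution principle behind SHAP (the most common post-hoc, model-agnostic technique in the reviewed studies), the sketch below computes exact Shapley values for a toy risk model by enumerating feature coalitions. The model, feature names, and patient values are hypothetical examples, not data from the review; this is a minimal conceptual sketch, not the `shap` library's implementation.

```python
# Toy sketch of the Shapley-value idea underlying SHAP: each feature's
# attribution is its average marginal effect on the model output over
# all coalitions of the remaining features. All names and coefficients
# below are hypothetical, chosen only to illustrate the computation.
from itertools import combinations
from math import factorial

def risk_model(features):
    # Hypothetical PC risk score with one interaction term.
    ca19_9 = features["ca19_9"]
    age = features["age"]
    diabetes = features["diabetes"]
    return 0.5 * ca19_9 + 0.2 * age + 0.3 * diabetes + 0.1 * ca19_9 * diabetes

def shapley_values(model, x, baseline):
    names = list(x)
    n = len(names)
    phi = {}
    for i in names:
        others = [f for f in names if f != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Coalition S uses the patient's values; absent features
                # fall back to baseline values.
                with_i = {f: (x[f] if f in S or f == i else baseline[f]) for f in names}
                without_i = {f: (x[f] if f in S else baseline[f]) for f in names}
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (model(with_i) - model(without_i))
        phi[i] = total
    return phi

patient = {"ca19_9": 1.0, "age": 0.8, "diabetes": 1.0}
baseline = {"ca19_9": 0.0, "age": 0.0, "diabetes": 0.0}
phi = shapley_values(risk_model, patient, baseline)

# Efficiency property: attributions sum to the gap between the
# patient's prediction and the baseline prediction.
assert abs(sum(phi.values()) - (risk_model(patient) - risk_model(baseline))) < 1e-9
```

In this sketch the interaction between CA 19-9 and diabetes is split evenly between the two features, which is exactly the behaviour that makes Shapley-based explanations attractive for identifying key biomarkers and comorbidities. The `shap` library approximates the same quantities efficiently for real models rather than enumerating coalitions.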