Abstract
Artificial intelligence (AI) has advanced rapidly in orthopaedic imaging, with deep learning models achieving remarkable accuracy in tasks such as knee osteoarthritis classification and grading, fracture detection, and implant assessment. Yet accuracy alone is insufficient for clinical trust and adoption. Orthopaedic decision-making often takes place in high-risk settings, where misclassification or overconfidence can have significant consequences for treatment recommendations and patient outcomes. Despite this reality, most current AI models operate as "black boxes", providing predictions without clarifying their reasoning or quantifying their uncertainty. This forum article argues that integrating uncertainty quantification and explainable AI is no longer optional; it is a timely call to action for the orthopaedic community. Uncertainty quantification methods can flag unreliable predictions, prompting confirmatory testing or human oversight, while explainable AI techniques provide transparency into model reasoning, enabling surgeons and radiologists to interpret AI outputs more effectively. Together, these advances are essential components of trustworthy AI, bridging the gap between technical innovation and real-world orthopaedic practice. By embracing uncertainty-aware and explainable models, orthopaedic imaging can move beyond accuracy toward accountability, responsibility, and safer integration into clinical workflows. The time to act is now.