Uncertainty quantification and explainable AI in orthopaedic imaging: A timely call to action


Abstract

Artificial intelligence (AI) has made significant advances in orthopaedic imaging, with deep learning models achieving remarkable accuracy in tasks such as knee osteoarthritis classification and grading, fracture detection, and implant assessment. Yet accuracy alone is insufficient for clinical trust and adoption. Orthopaedic decision-making often occurs in high-risk settings, where misclassification or overconfidence can have significant consequences for treatment recommendations and patient outcomes. Despite this reality, most current AI models operate as "black boxes", providing predictions without clarifying their reasoning or quantifying uncertainty. This forum article argues that the integration of uncertainty quantification and explainable AI is no longer optional, but a timely call to action for the orthopaedic community. Uncertainty quantification methods can highlight when predictions are unreliable, prompting confirmatory testing or human oversight, while explainable AI techniques provide transparency into model reasoning, enabling surgeons and radiologists to better interpret AI outputs. Together, these advances are essential components of trustworthy AI, bridging the gap between technical innovation and real-world orthopaedic practice. By embracing uncertainty-aware and explainable AI models, orthopaedic imaging can move beyond accuracy toward accountability, responsibility, and safer integration into clinical workflows. The time to act is now.
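The abstract's point about uncertainty quantification flagging unreliable predictions can be illustrated with a minimal sketch. The example below assumes a deep-ensemble (or Monte Carlo dropout) setup, where several stochastic forward passes each produce a softmax distribution over grades; the predictive entropy of the averaged distribution serves as an uncertainty score, and a hypothetical threshold routes high-entropy cases to human review. The class labels, probabilities, and threshold are all illustrative, not taken from the article.

```python
import math

def entropy(probs):
    """Predictive entropy in nats; higher means more uncertain."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def ensemble_uncertainty(member_probs):
    """Average softmax outputs across ensemble members (or MC-dropout
    passes) and return (mean_probs, predictive_entropy)."""
    n = len(member_probs)
    k = len(member_probs[0])
    mean = [sum(m[c] for m in member_probs) / n for c in range(k)]
    return mean, entropy(mean)

# Toy example: three stochastic passes grading a knee radiograph
# into three hypothetical severity classes.
agreeing = [[0.90, 0.08, 0.02], [0.92, 0.05, 0.03], [0.88, 0.09, 0.03]]
conflicting = [[0.70, 0.20, 0.10], [0.20, 0.70, 0.10], [0.30, 0.20, 0.50]]

_, h_low = ensemble_uncertainty(agreeing)
_, h_high = ensemble_uncertainty(conflicting)

# A deployment rule might withhold the automated prediction and flag
# the case for a radiologist when entropy exceeds a chosen threshold
# (here, half the maximum entropy for 3 classes -- an assumed value).
threshold = 0.5 * math.log(3)
flag_for_review = h_high > threshold
```

When the passes agree, entropy stays low and the prediction can be surfaced directly; when they conflict, entropy rises past the threshold and the case is escalated, matching the abstract's notion of prompting confirmatory testing or human oversight.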
