Enhancing Clinician Trust in AI Diagnostics: A Dynamic Framework for Confidence Calibration and Transparency

Abstract

Background: Artificial Intelligence (AI)-driven Decision Support Systems (DSSs) promise improvements in diagnostic accuracy and clinical workflow efficiency, but their adoption is hindered by inadequate confidence calibration, limited transparency, and poor alignment with real-world decision processes, which limit clinician trust and lead to high override rates.

Methods: We developed and validated a dynamic scoring framework to enhance trust in AI-generated diagnoses by integrating AI confidence scores, semantic similarity measures, and transparency weighting into the override decision process, using 6689 cardiovascular cases from the MIMIC-III dataset. Override thresholds were calibrated and validated across varying transparency and confidence levels, with override rate as the primary acceptance measure.

Results: The framework reduced the overall override rate to 33.29%; high-confidence predictions (90-99%) were overridden at a rate of only 1.7%, and low-confidence predictions (70-79%) at a rate of 99.3%. Minimal-transparency diagnoses had a 73.9% override rate, compared with 49.3% for moderate transparency. Statistical analyses confirmed significant associations between confidence, transparency, and override rates (p < 0.001).

Conclusions: These findings suggest that enhanced transparency and confidence calibration can substantially reduce override rates and promote clinician acceptance of AI diagnostics. Future work should focus on clinical validation to optimize patient safety, diagnostic accuracy, and efficiency.
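To make the methods concrete, the override-decision rule described above can be sketched as a weighted combination of the three signals the abstract names: an AI confidence score, a semantic-similarity measure, and a transparency weight, compared against a calibrated threshold. The function name, the linear weighting scheme, and all numeric values below are illustrative assumptions, not the paper's actual formula.

```python
# Hypothetical sketch of a dynamic scoring rule of the kind described in the
# abstract. The weights and threshold are placeholder values; in the study
# they would be calibrated against observed override behavior.

def override_recommended(confidence: float,
                         similarity: float,
                         transparency: float,
                         weights: tuple = (0.5, 0.3, 0.2),
                         threshold: float = 0.6) -> bool:
    """Return True when the combined trust score falls below the
    calibrated threshold, i.e. when an override is recommended.

    All inputs are assumed to be normalized to [0, 1].
    """
    w_c, w_s, w_t = weights
    trust_score = w_c * confidence + w_s * similarity + w_t * transparency
    return trust_score < threshold

# A high-confidence, well-explained prediction keeps the AI diagnosis:
print(override_recommended(0.95, 0.85, 0.80))  # False
# A low-confidence, minimally transparent prediction triggers an override:
print(override_recommended(0.72, 0.40, 0.20))  # True
```

Under this sketch, raising either the model's calibrated confidence or the transparency weight pushes the trust score above the threshold, mirroring the abstract's finding that high-confidence, more transparent diagnoses are overridden far less often.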
