Integrating Confidence, Difficulty, and Language Model Calibration for Better Explainability in Clinical Documents Coding: Applications of AI


Abstract

BACKGROUND: In recent years, there has been increasing interest in developing machine and deep learning models capable of annotating clinical documents with semantically relevant labels. However, the complex nature of these models often leads to significant challenges regarding interpretability and transparency.

OBJECTIVE: This study aims to improve the interpretability of transformer models and evaluate the explainability of a deep learning-based annotation of coded clinical documents derived from death certificates. Specifically, the focus is on interpreting and explaining model behavior and predictions by leveraging calibrated confidence, saliency maps, and measures of instance difficulty applied to textualized representations coded using the International Statistical Classification of Diseases and Related Health Problems (ICD). In particular, the instance difficulty approach has previously proven effective in interpreting image-based models.

METHODS: We used disease language bidirectional encoder representations from transformers, a domain-specific bidirectional encoder representations from transformers model pretrained on ICD classification-related data, to analyze reverse-coded representations of death certificates from the US National Center for Health Statistics, covering the years 2014 to 2017 and comprising 12,919,268 records. The model inputs consist of textualized representations of ICD-coded fields derived from death certificates, obtained by mapping codes to the corresponding ICD concept titles. For this study, we extracted a subset of 400,000 certificates for training, 100,000 for testing, and 10,000 for validation. We assessed the model's calibration and applied a temperature scaling post-hoc calibration method to improve the reliability of its confidence scores. Additionally, we introduced mechanisms to rank instances by difficulty using Variance of Gradients scores, which also facilitate the detection of out-of-distribution cases. Saliency maps were also used to enhance interpretability by highlighting which tokens in the input text most influenced the model's predictions.

RESULTS: Experimental results on a pre-fine-tuned model for predicting the underlying cause of death from reverse-coded death certificate representations, which already achieves high accuracy (0.990), show good out-of-the-box calibration with respect to expected calibration error (1.40), though less so for maximum calibration error (30.91). Temperature scaling further reduces expected calibration error (1.13) while significantly increasing maximum calibration error (42.17). We report detailed Variance of Gradients analyses at the ICD category and chapter levels, including distributions of target and input categories, and provide word-level attributions using Integrated Gradients for both correctly classified and failure cases.

CONCLUSIONS: This study demonstrates that enhancing interpretability and explainability in deep learning models can improve their practical utility in clinical document annotation. By addressing reliability and transparency, the proposed approaches support more informed and trustworthy application of machine learning in mission-critical medical settings. The results also highlight the ongoing need to address data limitations and ensure robust performance, especially for rare or complex cases.
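The post-hoc temperature scaling described in the Methods, together with the expected calibration error (ECE) reported in the Results, can be sketched in plain NumPy. This is a minimal illustration under stated assumptions, not the authors' implementation: `fit_temperature` uses a grid search over a single scalar T (a common alternative is LBFGS on validation-set logits), and all function names here are illustrative.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Divide logits by temperature T before normalizing; T > 1 softens
    # (flattens) the distribution, T < 1 sharpens it.
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def expected_calibration_error(probs, labels, n_bins=10):
    # ECE: bin predictions by top-class confidence, then compare mean
    # confidence to empirical accuracy in each bin, weighted by bin size.
    conf = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    # Pick the scalar T that minimizes negative log-likelihood.
    # In practice T is fit on held-out validation logits only.
    def nll(T):
        p = softmax(logits, T)
        return -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    return min(grid, key=nll)
```

For an overconfident model (high confidence, lower accuracy), the fitted T exceeds 1, softening the predicted distribution so that reported confidence tracks accuracy more closely; the abstract's ECE/MCE trade-off shows that this average-case improvement does not guarantee the worst bin improves.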
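The Variance of Gradients ranking mentioned in the Methods scores each instance by how much the gradient of its predicted-class logit (with respect to the input) fluctuates across training checkpoints; higher variance suggests a harder or out-of-distribution example. A minimal sketch of the scoring step, assuming the per-checkpoint gradients have already been collected (the helper names are hypothetical):

```python
import numpy as np

def vog_score(checkpoint_grads):
    # checkpoint_grads: (n_checkpoints, n_features) array holding the
    # gradient of the predicted-class logit w.r.t. one instance's input
    # features, captured at several points during training.
    g = np.asarray(checkpoint_grads, dtype=float)
    per_feature_var = g.var(axis=0)        # variance across checkpoints
    return float(per_feature_var.mean())   # average over features -> one score

def rank_by_difficulty(grads_per_instance):
    # Sort instances by descending VoG: hardest (or most likely
    # out-of-distribution) cases come first.
    scores = [vog_score(g) for g in grads_per_instance]
    order = np.argsort(scores)[::-1]
    return order, scores
```

In the paper's setting the gradients would be taken with respect to the token embeddings of the textualized ICD fields; here they are treated as flat feature vectors for simplicity.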
