Joint explainable and fair AI in healthcare

Abstract

The nature of decisions in the healthcare domain necessitates accurate, interpretable, and reliable AI solutions. Explanation Guided Learning (EGL) explores the integration of explanation annotations into learning models to align human and model explanations. In this paper, we propose Explanation Constraints Guided Learning (ECGL), a novel approach inspired by the augmented Lagrangian method that integrates domain-specific explanation constraints directly into model training. The goal is to enhance both predictive accuracy and interpretability, making machine learning models more trustworthy. Experimental results on both tabular and image datasets demonstrate that ECGL maintains high accuracy while incorporating fairness and interpretability constraints. Specifically, ECGL improves predictive accuracy on the diabetes dataset compared to the base model and enhances feature alignment, as shown by SHAP analysis: an average 36.8% increase in SHAP importance demonstrates that ECGL effectively aligns model explanations with domain knowledge. Furthermore, ECGL improves the identification of clinically significant regions in pneumonia X-ray images, as validated by both an improved Equalized Odds Ratio (EOR) and GradCAM visualizations; ECGL achieves a 13% improvement in the EOR fairness metric, indicating more consistent predictive performance across groups. These results confirm that ECGL balances performance, fairness, and interpretability, positioning it as a promising approach for trustworthy healthcare AI applications.
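
The abstract does not give ECGL's exact formulation, but the augmented-Lagrangian idea it references can be sketched as follows. This is a minimal, assumed illustration only: the model, toy data, `important_idx` feature list, and the gradient-attribution constraint `g(theta)` are placeholders standing in for the paper's actual components, not ECGL itself.

```python
# Sketch: augmented-Lagrangian-style training with a hypothetical explanation constraint.
# Assumptions: a toy tabular classifier and a constraint penalizing attribution mass
# that falls outside domain-designated features.
import torch
import torch.nn as nn

def explanation_constraint(model, x, important_idx):
    """Hypothetical constraint g(theta): share of gradient-based attribution
    assigned to features NOT flagged as important by domain knowledge."""
    x = x.clone().requires_grad_(True)
    grads = torch.autograd.grad(model(x).sum(), x, create_graph=True)[0].abs()
    total = grads.sum(dim=1)
    inside = grads[:, important_idx].sum(dim=1)
    return ((total - inside) / (total + 1e-8)).mean()

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam, rho = 0.0, 1.0                          # Lagrange multiplier and penalty weight

x = torch.randn(64, 8)                       # toy stand-in for tabular clinical features
y = torch.randint(0, 2, (64, 1)).float()     # toy binary labels
important_idx = torch.tensor([0, 2])         # features flagged by domain knowledge (assumed)

for step in range(200):
    opt.zero_grad()
    g = explanation_constraint(model, x, important_idx)
    # Augmented Lagrangian objective: prediction loss + lam*g + (rho/2)*g^2
    loss = bce(model(x), y) + lam * g + 0.5 * rho * g ** 2
    loss.backward()
    opt.step()
    if step % 10 == 0:
        # Periodic dual-ascent update of the multiplier on the current violation
        lam = max(0.0, lam + rho * g.item())
```

The dual update raises the multiplier whenever the constraint stays violated, gradually pushing the model's attributions toward the domain-specified features, while the primal gradient step continues to minimize the prediction loss.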
