UbiQTree: Uncertainty quantification in XAI with tree ensembles


Abstract

Explainable artificial intelligence (XAI) techniques, particularly Shapley additive explanations (SHAP), are essential for interpreting ensemble tree-based models in critical areas such as healthcare. However, SHAP values are typically reported as point estimates, neglecting the uncertainty that arises from aleatoric (irreducible noise) and epistemic (lack of data) sources. This work introduces an approach that decomposes the uncertainty of SHAP values into aleatoric, epistemic, and entanglement components, employing Dempster-Shafer evidence theory and Dirichlet process (DP) hypothesis sampling over tree ensembles. Validation on use cases reveals insights into the epistemic uncertainty within SHAP explanations, improving the reliability and interpretability of SHAP attributions and thereby informing robust decision-making and model refinement. Our findings suggest that reducing epistemic uncertainty requires improved data quality and model development techniques, and that tree-based models, particularly bagging ensembles, are effective vehicles for quantifying such uncertainties.
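The idea of quantifying epistemic uncertainty in SHAP attributions via an ensemble can be illustrated with a minimal sketch. This is not the paper's UbiQTree method: it omits the Dempster-Shafer and entanglement machinery, uses plain bootstrap resampling as a crude stand-in for DP hypothesis sampling, and uses linear members (for which the exact Shapley value of feature i at x is w_i * (x_i - E[x_i])) instead of trees. All variable names and the toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y depends on feature 0 only; feature 1 is pure noise.
n = 500
X = rng.normal(size=(n, 2))
y = 3.0 * X[:, 0] + 0.5 * rng.normal(size=n)


def fit_linear(Xb, yb):
    """Ordinary least squares with an intercept; returns [b, w0, w1]."""
    A = np.column_stack([np.ones(len(Xb)), Xb])
    coef, *_ = np.linalg.lstsq(A, yb, rcond=None)
    return coef


# Bagging: each bootstrap resample yields one "hypothesis" about the model,
# loosely analogous to the paper's DP hypothesis samples over tree ensembles.
n_members = 50
shap_members = np.empty((n_members, n, 2))
mu = X.mean(axis=0)  # background expectation E[x]
for m in range(n_members):
    idx = rng.integers(0, n, size=n)
    coef = fit_linear(X[idx], y[idx])
    # Exact Shapley values for a linear model: w_i * (x_i - E[x_i]).
    shap_members[m] = coef[1:] * (X - mu)

# Point attribution = mean over members; spread across members is a
# simple proxy for the epistemic component of attribution uncertainty.
mean_shap = shap_members.mean(axis=0)
epistemic = shap_members.var(axis=0)

print("mean |attribution| per feature:", np.abs(mean_shap).mean(axis=0))
print("mean epistemic variance per feature:", epistemic.mean(axis=0))
```

Under this decomposition, attributions for the informative feature dominate in magnitude, and member-to-member variance shrinks as more data is added, mirroring the paper's observation that epistemic uncertainty is reducible through data quality.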
