AutoXAI: a meta-learning approach for recommendation of explanation techniques


Abstract

The absence of a universally optimal global explanation technique for machine learning models presents a significant challenge in ensuring interpretability across diverse tasks and domains. Existing methods vary in their strengths and limitations, and selecting the most suitable technique often requires manual trial-and-error, which is inefficient and prone to bias. This study introduces AutoXAI, a meta-learning framework designed to automate the recommendation of global explanation techniques for supervised learning tasks on tabular data. The framework aims to optimize interpretability by aligning recommendations with user-defined quantitative metrics. AutoXAI leverages optimal transport to identify datasets with similar underlying distributions and applies multi-objective optimization to select explanation methods that best satisfy the chosen metrics. The framework currently supports four widely adopted model-agnostic techniques: LIME, Anchor, RuleFit, and RuleMatrix. AutoXAI was evaluated across 21 benchmark datasets, where its recommendations matched the best-performing explanation techniques in 19 cases. Comparative analysis against static heuristics demonstrated superior performance. Robustness experiments showed that AutoXAI maintained consistent recommendations under noise perturbations in over 90% of scenarios, confirming its reliability. AutoXAI offers a scalable, data-driven solution for selecting explanation techniques, reducing manual effort and enhancing trust in model decisions. Its adaptability to user preferences and resilience to noise make it a promising tool for real-world deployment, including critical domains such as healthcare and intrusion detection systems.
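The two core ideas in the abstract can be illustrated with a minimal sketch. This is not AutoXAI's actual implementation: the function names and toy data are hypothetical, the one-dimensional Wasserstein distance stands in for the paper's full optimal-transport comparison, and a weighted-sum scalarization stands in for true multi-objective optimization.

```python
# Illustrative sketch only; names and data are hypothetical, not AutoXAI's API.
# Step 1: find the reference dataset closest in distribution to a new dataset,
# using the 1-D Wasserstein distance as a simple optimal-transport proxy.
# Step 2: recommend the explanation technique that best satisfies user-weighted
# interpretability metrics measured on that nearest reference dataset.

def wasserstein_1d(xs, ys):
    """1-D Wasserstein distance between two equal-size samples."""
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

def nearest_dataset(new_sample, reference):
    """Name of the reference dataset whose distribution is closest."""
    return min(reference, key=lambda name: wasserstein_1d(new_sample, reference[name]))

def recommend(technique_scores, weights):
    """Technique maximizing the weighted sum of metric scores
    (a scalarization of the multi-objective selection problem)."""
    def utility(name):
        return sum(weights[m] * s for m, s in technique_scores[name].items())
    return max(technique_scores, key=utility)

# Toy example: two reference datasets summarized by one feature each.
reference = {"adult": [0.1, 0.4, 0.9, 1.2], "wine": [5.0, 5.5, 6.1, 7.0]}
closest = nearest_dataset([0.2, 0.5, 0.8, 1.1], reference)   # -> "adult"

# Hypothetical metric scores for two supported techniques on that dataset.
scores = {
    "LIME":   {"fidelity": 0.80, "coverage": 0.60},
    "Anchor": {"fidelity": 0.75, "coverage": 0.90},
}
best = recommend(scores, {"fidelity": 0.5, "coverage": 0.5})  # -> "Anchor"
```

A user who prioritizes fidelity over coverage would simply shift the weights, and the recommendation would adapt accordingly, which mirrors the abstract's claim of aligning recommendations with user-defined quantitative metrics.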
