Abstract
The absence of a universally optimal global explanation technique for machine learning models presents a significant challenge in ensuring interpretability across diverse tasks and domains. Existing methods vary in their strengths and limitations, and selecting the most suitable technique often requires manual trial-and-error, which is inefficient and prone to bias. This study introduces AutoXAI, a meta-learning framework designed to automate the recommendation of global explanation techniques for supervised learning tasks on tabular data. The framework aims to optimize interpretability by aligning recommendations with user-defined quantitative metrics. AutoXAI leverages optimal transport to identify datasets with similar underlying distributions and applies multi-objective optimization to select explanation methods that best satisfy the chosen metrics. The framework currently supports four widely adopted model-agnostic techniques: LIME, Anchor, RuleFit, and RuleMatrix. AutoXAI was evaluated across 21 benchmark datasets, where its recommendations matched the best-performing explanation techniques in 19 cases. Comparative analysis against static heuristics demonstrated superior performance. Robustness experiments showed that AutoXAI maintained consistent recommendations under noise perturbations in over 90% of scenarios, confirming its reliability. AutoXAI offers a scalable, data-driven solution for selecting explanation techniques, reducing manual effort and enhancing trust in model decisions. Its adaptability to user preferences and resilience to noise make it a promising tool for real-world deployment, including critical domains such as healthcare and intrusion detection systems.