Explainable machine learning by SEE-Net: closing the gap between interpretable models and DNNs


Abstract

Deep Neural Networks (DNNs) have achieved remarkable accuracy for numerous applications, yet their complexity often renders the explanation of predictions a challenging task. This complexity contrasts with easily interpretable statistical models, which, however, often suffer from lower accuracy. Our work suggests that this underperformance may stem more from inadequate training methods than from the inherent limitations of model structures. We hereby introduce the Synced Explanation-Enhanced Neural Network (SEE-Net), a novel architecture integrating a guiding DNN with a shallow neural network, functionally equivalent to a two-layer mixture of linear models. This shallow network is trained under the guidance of the DNN, effectively bridging the gap between the prediction power of deep learning and the need for explainable models. Experiments on image and tabular data demonstrate that SEE-Net can leverage the advantage of DNNs while providing an interpretable prediction framework. Critically, SEE-Net embodies a new paradigm in machine learning: it achieves high-level explainability with minimal compromise on prediction accuracy by training an almost "white-box" model under the co-supervision of a "black-box" model, which can be tailored for diverse applications.
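To make the architecture concrete, the sketch below shows one plausible reading of the abstract: an interpretable student that is a two-layer mixture of linear models (a linear gating network routing inputs to K linear experts), trained with a loss that combines the ground-truth labels and the guiding DNN's soft predictions. All class names, the loss weighting `alpha`, and the exact loss form are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

class MixtureOfLinearModels:
    """Hypothetical interpretable student: a linear gating network assigns
    each input a convex combination over K linear experts, so every
    prediction decomposes into weighted linear models whose coefficients
    can be inspected directly."""

    def __init__(self, d, k, c):
        self.Wg = rng.normal(0.0, 0.1, (d, k))      # gating weights
        self.We = rng.normal(0.0, 0.1, (k, d, c))   # K linear experts

    def forward(self, X):
        gate = softmax(X @ self.Wg)                       # (n, k) expert weights
        expert_logits = np.einsum('nd,kdc->nkc', X, self.We)
        logits = np.einsum('nk,nkc->nc', gate, expert_logits)
        return softmax(logits), gate

def co_supervised_loss(student_probs, teacher_probs, y_onehot, alpha=0.5):
    """Illustrative co-supervision objective (an assumption, not the paper's
    loss): cross-entropy on true labels blended with KL divergence toward
    the guiding DNN's soft outputs."""
    eps = 1e-12
    ce = -np.mean(np.sum(y_onehot * np.log(student_probs + eps), axis=1))
    kl = np.mean(np.sum(teacher_probs * (np.log(teacher_probs + eps)
                                         - np.log(student_probs + eps)), axis=1))
    return alpha * ce + (1.0 - alpha) * kl

# Usage on synthetic data; the "teacher" stands in for a trained DNN.
X = rng.normal(size=(8, 5))
student = MixtureOfLinearModels(d=5, k=3, c=4)
probs, gate = student.forward(X)
teacher_probs = softmax(rng.normal(size=(8, 4)))
y_onehot = np.eye(4)[rng.integers(0, 4, size=8)]
loss = co_supervised_loss(probs, teacher_probs, y_onehot)
```

Because each expert is linear, a prediction can be explained by reporting the active experts (the gating weights) and each expert's coefficients for the predicted class, which is the kind of built-in explainability the abstract contrasts with post-hoc explanation of a black-box DNN.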
