Interpretation of Machine Learning Models for Data Sets with Many Features Using Feature Importance


Abstract

Feature importance (FI) is used to interpret a machine learning model y = f(x) constructed between the explanatory variables or features, x, and the objective variables, y. When there are many features, interpreting the model in order of decreasing FI is inefficient if several features are similarly important. Therefore, in this study, a method is developed to interpret models by considering the similarities between features in addition to their FI. The cross-validated permutation feature importance (CVPFI), which can be calculated with any machine learning method and can handle multicollinearity, is used as the FI, while the absolute correlation coefficient and the maximal information coefficient are used as metrics of feature similarity. Machine learning models can be effectively interpreted by considering the features on the Pareto front, where CVPFI is large and feature similarity is small. Analyses of actual molecular and material data sets confirm that the proposed method enables accurate interpretation of machine learning models.
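The workflow described above can be sketched in a few steps: average permutation importance over cross-validation folds (a simplified reading of CVPFI), score each feature's similarity to the others (here only absolute correlation, omitting the maximal information coefficient), and keep the features on the Pareto front of large importance and small similarity. This is an illustrative sketch using scikit-learn, not the authors' implementation; the synthetic data, model choice, and the use of each feature's maximum absolute correlation as its similarity score are assumptions.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import KFold

def cross_validated_pfi(X, y, model, n_splits=5, seed=0):
    """Average permutation importance over validation folds (simplified CVPFI)."""
    importances = np.zeros(X.shape[1])
    for train, val in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        model.fit(X[train], y[train])
        result = permutation_importance(model, X[val], y[val],
                                        n_repeats=5, random_state=seed)
        importances += result.importances_mean
    return importances / n_splits

def pareto_front(fi, sim):
    """Indices of features not dominated by any feature with both
    higher importance and lower similarity."""
    front = []
    for i in range(len(fi)):
        dominated = any(fi[j] > fi[i] and sim[j] < sim[i]
                        for j in range(len(fi)) if j != i)
        if not dominated:
            front.append(i)
    return front

# Synthetic data standing in for a molecular/material data set.
X, y = make_regression(n_samples=200, n_features=8, n_informative=3,
                       noise=0.1, random_state=0)

fi = cross_validated_pfi(X, y, RandomForestRegressor(n_estimators=50, random_state=0))

# Similarity: each feature's strongest absolute correlation with any other feature.
corr = np.abs(np.corrcoef(X, rowvar=False))
np.fill_diagonal(corr, 0.0)
sim = corr.max(axis=1)

front = pareto_front(fi, sim)
print("Pareto-front features:", front)
```

Features on `front` are the candidates to inspect first: they are important without being redundant with a more important feature, which is the efficiency gain over scanning a plain FI ranking.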
