Hyperparameter Optimization EM Algorithm via Bayesian Optimization and Relative Entropy


Abstract

Hyperparameter optimization (HPO), also called hyperparameter tuning, is a vital component of developing machine learning models. Hyperparameters, which regulate the behavior of a machine learning algorithm and cannot be learned directly from the given training data, can significantly affect model performance. In the context of relevance vector machine hyperparameter optimization, we have used zero-mean Gaussian weight priors to derive iterative reestimation equations through evidence function maximization. For a general Gaussian weight prior and Bayesian linear regression, we similarly derive iterative reestimation equations for the hyperparameters through evidence function maximization. Subsequently, by using relative entropy and Bayesian optimization, these non-closed-form reestimation equations can be partitioned into E and M steps, providing a clear mathematical and statistical explanation of the iterative reestimation of the hyperparameters. The experimental results show the effectiveness of the EM algorithm for hyperparameter optimization, and the algorithm also has the merit of fast convergence, except when the covariance of the posterior distribution is a singular matrix, which hinders the increase in the likelihood.
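To illustrate the kind of iterative reestimation the abstract refers to, the sketch below implements the standard evidence-maximization (type-II maximum likelihood) updates for Bayesian linear regression with a zero-mean isotropic Gaussian weight prior, where each iteration alternates between computing the weight posterior and re-estimating the hyperparameters in closed form. This is a minimal illustration of that textbook scheme, not the paper's EM algorithm; the function and variable names are illustrative only.

```python
import numpy as np

def evidence_reestimation(Phi, t, alpha=1.0, beta=1.0, n_iter=200, tol=1e-6):
    """Re-estimate hyperparameters (alpha, beta) of a Bayesian linear
    regression t ~ N(Phi @ w, 1/beta) with prior w ~ N(0, (1/alpha) * I)
    by maximizing the evidence (type-II maximum likelihood)."""
    N, M = Phi.shape
    PhiT_Phi = Phi.T @ Phi
    for _ in range(n_iter):
        # Posterior over the weights given the current hyperparameters
        A = alpha * np.eye(M) + beta * PhiT_Phi       # posterior precision
        Sigma = np.linalg.inv(A)                      # posterior covariance
        m = beta * Sigma @ Phi.T @ t                  # posterior mean
        # Closed-form re-estimation of alpha and beta
        gamma = M - alpha * np.trace(Sigma)           # effective number of parameters
        alpha_new = gamma / (m @ m)
        residual = t - Phi @ m
        beta_new = (N - gamma) / (residual @ residual)
        if abs(alpha_new - alpha) < tol and abs(beta_new - beta) < tol:
            alpha, beta = alpha_new, beta_new
            break
        alpha, beta = alpha_new, beta_new
    return alpha, beta, m, Sigma

# Usage: fit noisy data with a polynomial design matrix
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
Phi = np.vander(x, 6, increasing=True)
t = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)
alpha, beta, m, Sigma = evidence_reestimation(Phi, t)
print(f"alpha = {alpha:.3f}, noise precision beta = {beta:.3f}")
```

Note that the update for alpha divides by the squared norm of the posterior mean and the inversion of the posterior precision assumes it is nonsingular; an ill-conditioned or singular posterior covariance, as mentioned at the end of the abstract, is exactly the situation in which such updates become unreliable.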
