Rates of convergence for regression with the graph poly-Laplacian

Abstract

In the (special) smoothing spline problem one considers a variational problem with a quadratic data fidelity penalty and Laplacian regularization. Higher order regularity can be obtained via replacing the Laplacian regulariser with a poly-Laplacian regulariser. The methodology is readily adapted to graphs and here we consider graph poly-Laplacian regularization in a fully supervised, non-parametric, noise corrupted, regression problem. In particular, given a dataset {x_i}_{i=1}^n and a set of noisy labels {y_i}_{i=1}^n ⊂ ℝ we let u_n : {x_i}_{i=1}^n → ℝ be the minimizer of an energy which consists of a data fidelity term and an appropriately scaled graph poly-Laplacian term. When y_i = g(x_i) + ξ_i, for iid noise ξ_i, and using the geometric random graph, we identify (with high probability) the rate of convergence of u_n to g in the large data limit n → ∞. Furthermore, our rate is close to the known rate of convergence in the usual smoothing spline model.
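The estimator described above can be sketched numerically: on a geometric random graph one minimizes a quadratic energy of the form (1/n)‖u − y‖² + τ uᵀLˢu, where L is the graph Laplacian, so the minimizer has the closed form u = (I + nτLˢ)⁻¹y. The following is a minimal illustrative sketch, not the paper's construction: the choices of τ, s, the connection radius, and the 1-D toy data are all assumptions made here for demonstration, not the scalings analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D data: noisy labels y_i = g(x_i) + xi_i for a smooth g.
n = 200
x = rng.uniform(0.0, 1.0, size=n)
g = np.sin(2 * np.pi * x)                # unknown smooth regression function
y = g + 0.3 * rng.standard_normal(n)     # iid-noise-corrupted labels

# Geometric random graph: connect points within radius eps (illustrative choice).
eps = 0.1
W = (np.abs(x[:, None] - x[None, :]) < eps).astype(float)
np.fill_diagonal(W, 0.0)

# Unnormalized graph Laplacian L = D - W.
L = np.diag(W.sum(axis=1)) - W

# Graph poly-Laplacian regression of order s: minimize
#   (1/n) * ||u - y||^2 + tau * u^T L^s u,
# whose first-order condition gives u = (I + n*tau*L^s)^{-1} y.
# (s and tau are hypothetical values, not the paper's tuned scalings.)
s, tau = 2, 1e-4
A = np.eye(n) + n * tau * np.linalg.matrix_power(L, s)
u = np.linalg.solve(A, y)
```

Since the constant vector lies in the kernel of L, the fit preserves the mean of the labels, and the Dirichlet energy uᵀLu of the fit never exceeds that of the raw labels.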
