On the Generalization Ability of Online Gradient Descent Algorithm Under the Quadratic Growth Condition


Abstract

Online learning has been successfully applied to various machine learning problems. Conventional analyses of online learning achieve a sharp generalization bound under a strong convexity assumption. In this paper, we study the generalization ability of the classic online gradient descent algorithm under the quadratic growth condition (QGC), a strictly weaker condition than strong convexity. Under some mild assumptions, we prove that the excess risk converges no worse than $O(\log T/T)$ when the data are independently and identically distributed (i.i.d.). When the data are generated from a $\phi$-mixing process, we achieve the excess risk bound $O(\log T/T+\phi(\tau))$, where $\phi(\tau)$ is the mixing coefficient capturing the non-i.i.d. attribute. Our key technique is based on the combination of the QGC and martingale concentration inequalities. Our results indicate that strong convexity is not necessary to achieve the sharp $O(\log T/T)$ convergence rate in online learning. We verify our theory on both synthetic and real-world data.
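As a rough illustration of the setting the abstract describes, the following is a minimal sketch of online gradient descent on a stream of i.i.d. quadratic losses, using the $\eta_t \propto 1/(\mu t)$ step-size schedule commonly associated with $O(\log T/T)$ rates. All names and parameter choices here (the linear model, `mu`, the averaging scheme) are illustrative assumptions, not the paper's actual experimental setup.

```python
import numpy as np

# Sketch of online gradient descent (OGD) on instantaneous losses
# f_t(w) = 0.5 * (x_t @ w - y_t)^2 with an i.i.d. data stream.
# Step size eta_t = 1 / (mu * t) is the schedule typically used to
# obtain O(log T / T) excess-risk rates; mu is a curvature/growth
# parameter chosen here for stability, not taken from the paper.

rng = np.random.default_rng(0)
d, T, mu = 5, 2000, 5.0
w_star = rng.normal(size=d)      # ground-truth parameter (synthetic)
w = np.zeros(d)                  # online iterate
avg_w = np.zeros(d)              # running average of the iterates

for t in range(1, T + 1):
    x = rng.normal(size=d)       # fresh i.i.d. sample
    y = x @ w_star               # noiseless linear responses
    grad = (x @ w - y) * x       # gradient of the instantaneous loss
    w -= grad / (mu * t)         # OGD step with eta_t = 1/(mu*t)
    avg_w += (w - avg_w) / t     # incremental average of iterates

init_err = float(np.linalg.norm(w_star))
final_err = float(np.linalg.norm(avg_w - w_star))
print(init_err, final_err)
```

The averaged iterate's distance to the ground truth shrinks as more samples arrive, which is the qualitative behavior the excess-risk bound formalizes.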
