Exact gradient methods with memory

The Inexact Gradient Method with Memory (IGMM) can considerably outperform the Gradient Method by employing a piecewise linear lower model on the smooth part of the objective. However, the auxiliary problem can only be solved within a fixed tolerance at every iteration. The need to contain this inexactness narrows the range of problems to which IGMM can be applied and degrades its worst-case convergence rate. In this work, we show how a simple modification of IGMM removes the tolerance parameter from the analysis. The resulting Exact Gradient Method with Memory (EGMM) is as broadly applicable as the Bregman Distance Gradient Method/NoLips and has the same worst-case rate of O(1/k), the best for its class. Under necessarily stricter assumptions, EGMM can be accelerated without error accumulation, yielding an Accelerated Gradient Method with Memory (AGMM) with a worst-case rate of O(1/k²). In our preliminary computational experiments, EGMM displays excellent performance, sometimes surpassing accelerated methods. When the model discards old information, AGMM also consistently outperforms the Fast Gradient Method.
Author: Florea, Mihai I.
| Journal: | Optimization Methods & Software | Impact factor: | 1.400 |
| Year: | 2022 | Volume/pages: | 2022 Jul 20; 37(6):2310-2337 |
| DOI: | 10.1080/10556788.2022.2091559 | | |
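
The abstract above refers to the piecewise linear lower model built from past linearizations and to the proximal auxiliary problem that each iteration of a gradient method with memory must solve. The sketch below is only a minimal illustration of that idea in the unaccelerated Euclidean setting; it is not the paper's IGMM/EGMM/AGMM, and the function names, the bundle size `m`, and the use of SLSQP to solve the small dual problem over the simplex are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's algorithm) of a Euclidean gradient
# method with memory: the smooth objective f is lower-bounded by a piecewise linear
# model built from the last m linearizations, and each step minimizes this model
# plus a quadratic proximal term. The dual of that auxiliary problem is a small
# concave QP over the simplex, solved here with scipy's SLSQP.
import numpy as np
from scipy.optimize import minimize

def gmm(f, grad_f, x0, L, m=5, iters=50):
    """Gradient method keeping a bundle of at most m past linearizations."""
    x = np.asarray(x0, dtype=float)
    bundle = []                      # entries (f(z_i), grad_f(z_i), z_i)
    for _ in range(iters):
        bundle.append((f(x), grad_f(x), x.copy()))
        bundle = bundle[-m:]         # discard old information beyond m pieces
        fs = np.array([fi for fi, _, _ in bundle])
        G = np.stack([gi for _, gi, _ in bundle])   # k x n matrix of gradients
        Z = np.stack([zi for _, _, zi in bundle])
        # Auxiliary problem: min_y max_i [f(z_i) + <g_i, y - z_i>] + (L/2)||y - x||^2.
        # Its dual: max over lam in the simplex of
        #   sum_i lam_i (f(z_i) + <g_i, x - z_i>) - (1/(2L)) ||sum_i lam_i g_i||^2.
        c = fs + np.einsum('ij,ij->i', G, x - Z)
        def neg_dual(lam):
            g_bar = lam @ G
            return -(lam @ c - g_bar @ g_bar / (2.0 * L))
        k = len(bundle)
        lam0 = np.full(k, 1.0 / k)
        res = minimize(neg_dual, lam0, method='SLSQP',
                       bounds=[(0.0, 1.0)] * k,
                       constraints=[{'type': 'eq',
                                     'fun': lambda lam: lam.sum() - 1.0}])
        lam = res.x
        # Primal iterate recovered from the dual solution of the auxiliary problem.
        x = x - (lam @ G) / L
    return x

# Toy usage: a smooth convex quadratic whose Lipschitz constant L is known.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
f = lambda y: 0.5 * y @ A @ y - b @ y
grad_f = lambda y: A @ y - b
L = np.linalg.eigvalsh(A).max()
print(gmm(f, grad_f, np.zeros(2), L), np.linalg.solve(A, b))
```

With a bundle of size one the dual has a single weight equal to one and the step reduces to the ordinary gradient step; larger bundles tighten the lower model at the modest cost of a k-variable QP per iteration.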
