The Inexact Gradient Method with Memory (IGMM) is able to considerably outperform the Gradient Method by employing a piecewise-linear lower model on the smooth part of the objective. However, the auxiliary problem can only be solved within a fixed tolerance at every iteration. The need to contain the inexactness narrows the range of problems to which IGMM can be applied and degrades the worst-case convergence rate. In this work, we show how a simple modification of IGMM removes the tolerance parameter from the analysis. The resulting Exact Gradient Method with Memory (EGMM) is as broadly applicable as the Bregman Distance Gradient Method/NoLips and has the same worst-case rate of O(1/k), the best for its class. Under necessarily stricter assumptions, we can accelerate EGMM without error accumulation, yielding an Accelerated Gradient Method with Memory (AGMM) possessing a worst-case rate of O(1/k²). In our preliminary computational experiments, EGMM displays excellent performance, sometimes surpassing accelerated methods. When the model discards old information, AGMM also consistently exceeds the Fast Gradient Method.
Exact gradient methods with memory.
Author: Florea, Mihai I.
| Journal: | Optimization Methods & Software |
| Impact factor: | 1.400 |
| Year: | 2022 |
| Volume/pages: | 2022 Jul 20; 37(6):2310-2337 |
| DOI: | 10.1080/10556788.2022.2091559 |
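For context, the "piecewise-linear lower model" mentioned in the abstract is, for the smooth convex part f of the objective, a bundle of linearizations collected at previously visited points. The sketch below is a generic gradient-method-with-memory step for illustration only; it does not reproduce the paper's exact auxiliary problem, composite term, or bundle management.

```latex
% Lower model built from gradients at visited points x_0, ..., x_k
% (valid because each linearization underestimates a convex f):
\[
  \ell_k(x) \;=\; \max_{0 \le i \le k}
    \bigl\{\, f(x_i) + \langle \nabla f(x_i),\, x - x_i \rangle \,\bigr\}
  \;\le\; f(x).
\]
% Illustrative "memory" step: replace the single linearization of the plain
% gradient step with \ell_k in the auxiliary subproblem (L = smoothness constant):
\[
  x_{k+1} \;=\; \operatorname*{arg\,min}_{x}\;
    \Bigl\{\, \ell_k(x) + \tfrac{L}{2}\,\lVert x - x_k \rVert^2 \,\Bigr\}.
\]
% With a one-element bundle (i = k only) this reduces to the standard
% gradient step x_{k+1} = x_k - (1/L) \nabla f(x_k).
```

The max over past linearizations typically turns the auxiliary problem into a small quadratic program rather than a closed-form step, which is why IGMM can only solve it to a fixed tolerance; the exact variants studied in the paper modify the scheme so that this tolerance no longer appears in the analysis.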