A dual enhanced stochastic gradient descent method with dynamic momentum and step size adaptation for improved optimization performance


Abstract

In modern machine learning, optimization algorithms are crucial: they steer the training process by navigating complex, high-dimensional loss landscapes. Among these, stochastic gradient descent with momentum (SGDM) is widely adopted for its ability to accelerate convergence in shallow regions. However, SGDM struggles in challenging optimization landscapes, where narrow, curved valleys can cause oscillations and slow progress. This paper introduces dual enhanced SGD (DESGD), which addresses these limitations by dynamically adapting both the momentum and the step size within the same update rule as SGDM. On two optimization test functions, the Rosenbrock and Sum Squares functions, the proposed optimizer typically outperforms SGDM and Adam. For example, it achieves comparable error with up to 81–95% fewer iterations and 66–91% less CPU time than SGDM, and 67–78% fewer iterations with 62–70% faster runtimes than Adam. On the MNIST dataset, the proposed optimizer achieved the highest accuracies and lowest test losses across the majority of batch sizes. Compared to SGDM, it consistently improved accuracy by about 1–2%, while performing on par with or slightly better than Adam in accuracy and error. Although SGDM remained the fastest per-step optimizer, our method’s computational cost is in line with that of other adaptive optimizers such as Adam. This marginal increase in per-iteration overhead is justified by the substantial gains in model accuracy and reduction in training loss, demonstrating a favorable cost-to-performance ratio. The results demonstrate that DESGD is a promising practical optimizer for scenarios demanding stability in challenging landscapes.
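The abstract states that DESGD keeps the SGDM update rule but makes both the momentum coefficient and the step size dynamic; it does not give the adaptation formulas. The sketch below is therefore only illustrative: it uses SGDM as the base update and, as an assumed stand-in for the paper's mechanism, scales both quantities by the cosine alignment between the current gradient and the velocity, damping momentum when successive directions disagree (the oscillation case the abstract describes). The function name `desgd_sketch`, the alignment heuristic, and all hyperparameters are assumptions, not the authors' method. The demo runs it on the Sum Squares function, one of the two test functions named in the abstract.

```python
import numpy as np

def sum_squares_grad(x):
    """Gradient of the Sum Squares function f(x) = sum_i i * x_i^2."""
    i = np.arange(1, len(x) + 1)
    return 2.0 * i * x

def desgd_sketch(grad_fn, x0, lr=1e-3, beta=0.9, steps=10000):
    """Illustrative dual-adaptation SGDM (NOT the paper's exact rule):
    both the momentum coefficient and the step size are modulated by the
    cosine alignment between the current gradient and the velocity."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(steps):
        g = grad_fn(x)
        # Alignment in [-1, 1]; values near -1 indicate oscillation
        # across a narrow valley, where momentum should be damped.
        denom = np.linalg.norm(g) * np.linalg.norm(v)
        align = float(g @ v) / denom if denom > 0.0 else 0.0
        beta_t = beta * (1.0 + align) / 2.0   # shrink momentum when oscillating
        lr_t = lr * (1.0 + max(align, 0.0))   # grow step when directions agree
        v = beta_t * v + g                    # SGDM-style velocity update
        x = x - lr_t * v
    return x

# Minimize the 2-D Sum Squares function from the starting point (3, -2);
# the iterate converges toward the minimizer at the origin.
x_star = desgd_sketch(sum_squares_grad, [3.0, -2.0])
```

The split between the two scalings reflects the intuition in the abstract: misaligned steps cut the momentum (stabilizing narrow valleys), while consistently aligned steps lengthen the stride (accelerating shallow regions), all without leaving the SGDM update structure.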
