A refined lion optimizer for deep learning


Abstract

Optimization algorithms play a fundamental role in training neural networks. An optimizer governs how weights are updated from momentum and velocity statistics given the learning rate and loss; its computational complexity and the number of parameters it maintains are also important considerations. The Lion optimizer, proposed by Google, is known for fast training and efficient memory usage. However, because the sign function is discontinuous, Lion's parameter updates may fail to adapt dynamically to the momentum in some models, leading to non-convergence. In this paper, the Refined Lion Optimizer (RLion) introduces a novel update rule that replaces the sign function with a non-linear, continuous, bounded function applied to the product of the momentum and a scaling factor. This design allows the update to adapt to both the magnitude of the momentum and the scaling factor. Theoretical analysis shows that RLion smooths out fluctuations and converges faster and more reliably. For classification, FasterNet, EfficientNetV2, and YOLOv8 are trained with RLion on the ImageNet-1k dataset without warm-up. For object detection, YOLOv8 and YOLOv11 are trained on VOC2012, and an object detector based on Vision Transformers is trained on the Caltech 101 dataset. For semantic segmentation, DeepLabV3+ is trained on an instance-level human parsing dataset, TwinLiteNet on BDD100K, and UNet on part of the CARLA self-driving dataset. Compared with AdamW and Lion, the loss and accuracy results show that RLion improves classification validation accuracy by about [Formula: see text] over AdamW on many models, even at a learning rate as high as AdamW's. For object detection and semantic segmentation, RLion matches or approaches the performance of AdamW while avoiding Lion's susceptibility to exploding or vanishing gradients. Overall, RLion offers better convergence and greater versatility.
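The abstract's core idea can be sketched as follows. A minimal NumPy comparison of a Lion step against an RLion-style step: the specific bounded function and the names `alpha`, `lion_step`, and `rlion_step` are assumptions for illustration (the paper does not specify them here); `tanh(alpha * c)` stands in for the "non-linear continuous bounded function" applied to the momentum interpolation times a scaling factor.

```python
import numpy as np

def lion_step(w, g, m, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.01):
    """One Lion step: the update direction is sign(interpolated momentum)."""
    c = beta1 * m + (1 - beta1) * g          # interpolation used for the update
    w = w - lr * (np.sign(c) + wd * w)       # sign() gives fixed-magnitude steps
    m = beta2 * m + (1 - beta2) * g          # momentum (EMA) update
    return w, m

def rlion_step(w, g, m, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.01, alpha=10.0):
    """RLion-style sketch: sign() replaced by a smooth bounded map.

    tanh(alpha * c) is an ASSUMED instance of the paper's bounded function;
    `alpha` plays the role of the scaling factor, so small momenta now yield
    proportionally small updates instead of unit-magnitude ones.
    """
    c = beta1 * m + (1 - beta1) * g
    w = w - lr * (np.tanh(alpha * c) + wd * w)  # bounded, yet scales with |c|
    m = beta2 * m + (1 - beta2) * g
    return w, m
```

Both updates are bounded in magnitude by `lr * (1 + wd * |w|)`, but the smooth variant shrinks continuously as the momentum interpolation approaches zero, which is the mechanism the abstract credits for damping fluctuations near convergence.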
