GCSAM: Gradient Centralized Sharpness Aware Minimization


Abstract

The generalization performance of deep neural networks (DNNs) is a critical factor in achieving robust model behavior on unseen data. Recent studies have highlighted the importance of sharpness-based measures in promoting generalization by encouraging convergence to flatter minima. Among these approaches, Sharpness-Aware Minimization (SAM) has emerged as an effective optimization technique for reducing the sharpness of the loss landscape, thereby improving generalization. However, SAM's computational overhead and sensitivity to noisy gradients limit its scalability and efficiency. To address these challenges, we propose Gradient-Centralized Sharpness-Aware Minimization (GCSAM), which incorporates Gradient Centralization (GC) to stabilize gradients and accelerate convergence. GCSAM normalizes gradients before the ascent step, reducing noise and variance, and improving stability during training. Our evaluations on both general vision benchmarks (CIFAR-10, CIFAR-100) and critical medical imaging datasets (breast ultrasound and COVID-19 chest X-rays) demonstrate that GCSAM consistently outperforms SAM and the Adam optimizer in terms of generalization and computational efficiency. These results highlight GCSAM's potential for improving reliability in domains where robust generalization is essential, particularly in medical image analysis. Our code is available at https://github.com/mhassann22/GCSAM.
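The core idea described above can be illustrated with a minimal sketch: centralize the gradient (zero-center it across each row) before SAM's norm-scaled ascent perturbation, then take the descent step from the perturbed point. This is an assumption-laden toy version, not the authors' implementation; `centralize` and `gcsam_step` are hypothetical names, and the centralization axis and hyperparameters (`rho`, `lr`) are illustrative defaults.

```python
import numpy as np

def centralize(grad):
    # Gradient Centralization (hypothetical sketch): subtract the mean
    # over all dimensions except the first, zero-centering each row.
    if grad.ndim > 1:
        axes = tuple(range(1, grad.ndim))
        return grad - grad.mean(axis=axes, keepdims=True)
    return grad

def gcsam_step(w, loss_grad, lr=0.1, rho=0.05, eps=1e-12):
    """One illustrative GCSAM update on parameters `w`.

    `loss_grad(w)` returns the gradient of the loss at `w`.
    The gradient is centralized *before* SAM's ascent step,
    then the descent step uses the gradient at the perturbed point.
    """
    g = centralize(loss_grad(w))             # centralized gradient
    e = rho * g / (np.linalg.norm(g) + eps)  # SAM ascent perturbation
    g_adv = loss_grad(w + e)                 # gradient at perturbed point
    return w - lr * g_adv                    # descent step

# Toy quadratic loss L(w) = 0.5 * ||w||^2, so its gradient is w itself.
w = np.array([[2.0, -1.0], [0.5, 1.5]])
w_new = gcsam_step(w, lambda p: p)
```

On this toy loss the centralized gradient has zero mean per row, and the update still shrinks the parameters toward the minimum, which is the intended effect: the centralization step reduces gradient variance without changing the overall SAM two-step structure.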
