Abstract
The generalization performance of deep neural networks (DNNs) is a critical factor in achieving robust model behavior on unseen data. Recent studies have highlighted the importance of sharpness-based measures in promoting generalization by encouraging convergence to flatter minima. Among these approaches, Sharpness-Aware Minimization (SAM) has emerged as an effective optimization technique for reducing the sharpness of the loss landscape and thereby improving generalization. However, SAM's computational overhead and sensitivity to noisy gradients limit its scalability and efficiency. To address these challenges, we propose Gradient-Centralized Sharpness-Aware Minimization (GCSAM), which incorporates Gradient Centralization (GC) to stabilize gradients and accelerate convergence. GCSAM normalizes gradients before SAM's ascent step, reducing gradient noise and variance and improving training stability. Our evaluations on general vision benchmarks (CIFAR-10, CIFAR-100) and medical imaging datasets (breast ultrasound and COVID-19 chest X-rays) demonstrate that GCSAM consistently outperforms SAM and the Adam optimizer in both generalization and computational efficiency. These results highlight GCSAM's potential for improving reliability in domains where robust generalization is essential, particularly medical image analysis. Our code is available at https://github.com/mhassann22/GCSAM.
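For concreteness, the sketch below illustrates one GCSAM training step as the abstract describes it: Gradient Centralization is applied to the gradients before SAM's ascent perturbation. It assumes PyTorch; the function names (gcsam_step, centralize), the default rho=0.05, and the overall structure are illustrative assumptions rather than the paper's reference implementation, which is available at the repository linked above.

```python
# Hypothetical sketch of one GCSAM step (PyTorch assumed).
# Not the authors' reference code; names and rho=0.05 are illustrative.
import torch

def centralize(grad: torch.Tensor) -> torch.Tensor:
    """Gradient Centralization: subtract the per-output-channel mean
    from gradients of multi-dimensional weight tensors."""
    if grad.dim() > 1:
        return grad - grad.mean(dim=tuple(range(1, grad.dim())), keepdim=True)
    return grad  # biases and other 1-D parameters are left unchanged

def gcsam_step(model, loss_fn, inputs, targets, base_opt, rho=0.05):
    # First forward/backward pass: gradients at the current weights.
    loss = loss_fn(model(inputs), targets)
    loss.backward()

    params = [p for p in model.parameters() if p.grad is not None]
    # Centralize gradients BEFORE the ascent step (the GC part of GCSAM).
    grads = [centralize(p.grad) for p in params]
    grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]), p=2)

    # SAM ascent: perturb weights toward higher loss within a rho-ball.
    eps = []
    with torch.no_grad():
        for p, g in zip(params, grads):
            e = rho * g / (grad_norm + 1e-12)
            p.add_(e)
            eps.append(e)
    base_opt.zero_grad()

    # Second forward/backward pass: gradients at the perturbed point.
    loss_fn(model(inputs), targets).backward()

    # Undo the perturbation, then update with the base optimizer
    # using the sharpness-aware gradients.
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)
    base_opt.step()
    base_opt.zero_grad()
    return loss.item()
```

Centralizing before computing the perturbation means the ascent direction itself is built from lower-variance gradients, which is one plausible reading of how GC stabilizes the sharpness estimate; the two forward/backward passes per step are inherited from standard SAM.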