Abstract
Federated learning (FL) is an effective distributed learning paradigm for protecting client privacy, enabling multiple clients to collaboratively train a global model without uploading their private data. It has promising applications in sports image classification. However, FL suffers from non-independent and identically distributed (non-IID) data, which induces excessive variance among local models and hinders the convergence of the global model. Although FedSAM and its variants attempt to reduce this variance by seeking flat (smooth) minima during local training, local smoothness does not necessarily translate into global smoothness; we refer to this issue as the smoothness inconsistency problem. To address this challenge, we propose a novel FL paradigm, named A-FedSAM, which uses adaptive local distillation to enforce smoothness consistency between the local and global models without incurring additional communication overhead, thereby improving the accuracy of the converged global model. Specifically, A-FedSAM employs the global model as a teacher during local training, dynamically guiding each local model so that its gradients not only remain smooth but also align with the global objective. Extensive experiments on sports image classification tasks demonstrate that A-FedSAM outperforms state-of-the-art methods in accuracy under varying degrees of data heterogeneity and client sampling rates, while requiring fewer communication and computational resources to reach the same target accuracy.
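To make the described local update concrete, the following is a minimal PyTorch-style sketch of one client's round that combines a SAM-style perturbation with distillation from the frozen global model acting as teacher. All names and hyperparameters (`rho`, `distill_weight`, `temperature`, `local_update`) are illustrative assumptions, not the authors' released implementation, and the fixed distillation weight below stands in for the paper's adaptive weighting scheme.

```python
import copy
import torch
import torch.nn.functional as F

def local_update(global_model, data_loader, rho=0.05, distill_weight=0.5,
                 lr=0.01, temperature=2.0, device="cpu"):
    """One client's local round: SAM perturbation plus distillation
    from the frozen global model (the 'teacher'). Hypothetical sketch."""
    teacher = copy.deepcopy(global_model).to(device).eval()
    student = copy.deepcopy(global_model).to(device).train()
    opt = torch.optim.SGD(student.parameters(), lr=lr)

    for x, y in data_loader:
        x, y = x.to(device), y.to(device)

        # --- SAM step 1: ascend to the worst-case nearby weights ---
        F.cross_entropy(student(x), y).backward()
        grad_norm = torch.norm(torch.stack(
            [p.grad.norm() for p in student.parameters() if p.grad is not None]))
        eps = []
        with torch.no_grad():
            for p in student.parameters():
                if p.grad is None:
                    eps.append(None)
                    continue
                e = rho * p.grad / (grad_norm + 1e-12)
                p.add_(e)          # perturb weights toward the sharp direction
                eps.append(e)
        opt.zero_grad()

        # --- SAM step 2: task loss + distillation at the perturbed point ---
        with torch.no_grad():
            teacher_logits = teacher(x)
        student_logits = student(x)
        task_loss = F.cross_entropy(student_logits, y)
        distill_loss = F.kl_div(
            F.log_softmax(student_logits / temperature, dim=1),
            F.softmax(teacher_logits / temperature, dim=1),
            reduction="batchmean") * temperature ** 2
        (task_loss + distill_weight * distill_loss).backward()

        # undo the perturbation, then take the descent step
        with torch.no_grad():
            for p, e in zip(student.parameters(), eps):
                if e is not None:
                    p.sub_(e)
        opt.step()
        opt.zero_grad()

    return student.state_dict()
```

In this sketch the distillation term pulls each local gradient toward the global model's predictions while the SAM perturbation seeks a flat local minimum, which is the intuition the abstract describes; since the teacher is the current global model already held by the client, no extra communication is required.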