Abstract
Federated Learning (FL) enables collaborative model training without centralized data sharing; however, its efficiency often degrades under system and statistical heterogeneity across clients. Increasing the number of local epochs per round can improve communication efficiency by enabling the global model to reach a target accuracy in fewer rounds. Excessive local training, however, may cause client models to diverge from the global model and slow convergence. To examine this trade-off, we conduct an empirical divergence analysis and show that consistently sufficient local updates across rounds can reduce the mean divergence between local and global models, promoting faster and more stable convergence. Building on this insight, we propose FedEff, a novel, efficient federated learning algorithm that assigns an optimal number of local epochs to each client in heterogeneous settings. FedEff employs a server-side epoch selection mechanism: using an Estimated Round Time (ERT) that accounts for the computation and communication speeds of all clients, the server computes the optimal number of local epochs for each client. Extensive simulations under heterogeneous computation and communication conditions confirm that the proposed approach notably reduces client waiting times and overall training duration within the considered simulation framework. Comparative results further show that FedEff achieves better training efficiency than FedAvg and random epoch selection strategies, establishing its effectiveness in improving federated learning performance under heterogeneous settings.