FedEff: efficient federated learning with optimal local epochs for heterogeneous clients

Abstract

Federated Learning (FL) enables collaborative model training without centralized data sharing; however, its efficiency often degrades under system and statistical heterogeneity across clients. Increasing the number of local epochs per round can improve efficiency by allowing the global model to reach a target accuracy in fewer communication rounds, yet excessive local training may cause client models to diverge from the global model and slow convergence. To examine this trade-off, we conduct an empirical divergence analysis and show that consistently sufficient local updates across rounds can reduce the mean divergence between local and global models, thereby promoting faster and more stable convergence. Building on this insight, we propose FedEff, an efficient federated learning algorithm that assigns an optimal number of local epochs to each client in heterogeneous settings. FedEff incorporates a server-side epoch selection mechanism: the server computes an Estimated Round Time (ERT) from the computation and communication speeds of all clients and uses it to determine each client's optimal number of local epochs. Extensive simulations under heterogeneous computation and communication conditions confirm that, within the considered simulation framework, the proposed approach achieves notable reductions in client waiting times and overall training duration. Comparative results show that our method achieves better training efficiency than FedAvg and random epoch selection strategies, establishing its effectiveness in improving federated learning performance under heterogeneous settings.
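
The abstract does not spell out how the ERT translates into per-client epoch counts. The sketch below is a minimal, hypothetical reading in Python: the ERT is set so that even the slowest client can complete a minimum number of epochs plus its model exchange, and each faster client fills the remaining time budget with additional local epochs. All names and parameters (select_local_epochs, epoch_time, comm_time, min_epochs, max_epochs) are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of server-side epoch selection driven by an
# Estimated Round Time (ERT). The exact FedEff formula is not given in
# the abstract; this illustrates one plausible budget-matching rule.

def select_local_epochs(epoch_time, comm_time, min_epochs=1, max_epochs=20):
    """Assign each client a local-epoch count that fits a shared round time.

    epoch_time[k]: estimated seconds per local epoch on client k
    comm_time[k]:  estimated seconds to exchange the model with client k
    Returns (epochs, ert).
    """
    # ERT: smallest round time in which even the slowest client can run
    # the minimum number of epochs and still upload its model.
    ert = max(min_epochs * e + c for e, c in zip(epoch_time, comm_time))

    # Faster clients fill the leftover budget with extra local epochs,
    # so all clients finish near the ERT and waiting time shrinks.
    epochs = [
        max(min_epochs, min(max_epochs, int((ert - c) // e)))
        for e, c in zip(epoch_time, comm_time)
    ]
    return epochs, ert


# Example: three heterogeneous clients (seconds/epoch, seconds/exchange).
epochs, ert = select_local_epochs([2.0, 5.0, 10.0], [1.0, 2.0, 3.0])
print(f"ERT = {ert:.1f}s, epochs per client = {epochs}")
# -> ERT = 13.0s, epochs per client = [6, 2, 1]
```

Under this rule the fastest client performs six local epochs while the slowest performs one, and all three finish within a second of each other, which is consistent with the abstract's claim of reduced client waiting times.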
