Federated Learning Under Evolving Distribution Shifts


Abstract

Federated learning (FL) is a distributed learning paradigm that facilitates training a global machine-learning model without collecting raw data from distributed clients. Recent advances in FL have addressed several considerations that are likely to transpire in realistic settings, such as data distribution heterogeneity among clients. However, most existing works still consider clients' data distributions to be static or conforming to a simple dynamic, e.g., changes in client participation rates. In real FL applications, client data distributions change over time, and the dynamics, i.e., the evolving pattern, can be highly non-trivial. Furthermore, the distribution shift may persist from training into testing. In this paper, we address dynamics in client data distributions and aim to train FL systems from time-evolving clients that can generalize to future target data. Specifically, we propose two algorithms, FedEvolve and FedEvp, which capture the evolving patterns of the clients during training and are test-robust under evolving distribution shifts. FedEvolve explicitly models the temporal evolution by learning two distinct representation mappings that capture the transition between consecutive data domains for each client. In addition, FedEvp learns a single, evolving-domain-invariant representation by aligning current data with prototypes that are continuously updated from all previously seen domains. Through extensive experiments on both synthetic and real data, we show the proposed algorithms can significantly outperform FL baselines across various network architectures.
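The prototype-alignment idea behind FedEvp can be illustrated with a minimal sketch. The snippet below is not the paper's implementation: the function names, the exponential-moving-average update, and the squared-distance loss are illustrative assumptions, showing only the general mechanism of maintaining per-class prototypes across previously seen domains and penalizing current-domain features that drift away from them.

```python
import numpy as np

def update_prototypes(prototypes, feats, labels, num_classes, momentum=0.9):
    """EMA-update per-class prototypes with features from the current domain.

    prototypes: (num_classes, d) running prototypes from previously seen domains
    feats:      (n, d) feature vectors extracted from the current domain
    labels:     (n,) integer class labels
    momentum:   weight kept on the historical prototypes (illustrative choice)
    """
    new_protos = prototypes.copy()
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            class_mean = feats[mask].mean(axis=0)
            # Blend the old prototype with the current domain's class mean.
            new_protos[c] = momentum * prototypes[c] + (1 - momentum) * class_mean
    return new_protos

def alignment_loss(feats, labels, prototypes):
    """Mean squared distance between each feature and its class prototype;
    adding this to the task loss pulls representations toward an
    evolving-domain-invariant solution."""
    diffs = feats - prototypes[labels]
    return float((diffs ** 2).sum(axis=1).mean())
```

In a full training loop, each client would extract features for its current batch, add `alignment_loss` to the usual classification loss, and refresh the prototypes after each round so they summarize all domains observed so far.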
