Abstract
Federated learning (FL) is a distributed learning paradigm that enables training a global machine-learning model without collecting raw data from distributed clients. Recent advances in FL have addressed several considerations that arise in realistic settings, such as heterogeneity of data distributions across clients. However, most existing works still assume clients' data distributions are static or follow simple dynamics, e.g., varying client participation rates. In real FL applications, client data distributions change over time, and the dynamics, i.e., the evolving patterns, can be highly non-trivial. Moreover, the distributions may continue to evolve from training time to test time. In this paper, we address dynamics in client data distributions and aim to train FL systems on time-evolving clients so that they generalize to future target data. Specifically, we propose two algorithms, FedEvolve and FedEvp, which capture the evolving patterns of client data during training and remain robust to evolving distribution shifts at test time. FedEvolve explicitly models the temporal evolution by learning two distinct representation mappings that capture the transition between consecutive data domains for each client. In contrast, FedEvp learns a single, evolving-domain-invariant representation by aligning current data with prototypes that are continuously updated from all previously seen domains. Through extensive experiments on both synthetic and real data, we show that the proposed algorithms significantly outperform FL baselines across various network architectures.
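To make the prototype-alignment idea behind FedEvp concrete, the following is a minimal sketch, assuming a PyTorch setup with one running prototype per class maintained as an exponential moving average over all domains seen so far on a client. The function names (`update_prototypes`, `alignment_loss`), the EMA update rule, and the MSE alignment term are illustrative assumptions, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of prototype alignment for an evolving-domain-invariant
# representation: prototypes are running per-class feature means accumulated
# over all domains seen so far, and current features are pulled toward them.

def update_prototypes(prototypes, feats, labels, momentum=0.9):
    """EMA-update one prototype per class from the current domain's features.

    prototypes: (num_classes, d) tensor of running class means.
    feats:      (batch, d) features from the current domain.
    labels:     (batch,) integer class labels.
    """
    for c in labels.unique():
        class_mean = feats[labels == c].mean(dim=0)
        prototypes[c] = momentum * prototypes[c] + (1 - momentum) * class_mean
    return prototypes

def alignment_loss(feats, labels, prototypes):
    """Align each feature with the prototype of its class."""
    return F.mse_loss(feats, prototypes[labels])

# Usage on one client in one round (encoder maps inputs to d-dim features):
#   feats = encoder(x)                                         # (batch, d)
#   prototypes = update_prototypes(prototypes, feats.detach(), y)
#   loss = task_loss + lam * alignment_loss(feats, y, prototypes)
```

Detaching the features in the prototype update keeps the running means fixed targets, so gradients flow only through the alignment term that moves current features toward the accumulated prototypes.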