Abstract
Federated Learning (FL) enables collaborative model training across institutions without sharing raw data, making it valuable for privacy-sensitive domains such as healthcare. However, FL performance deteriorates significantly when client datasets are non-IID. While dataset similarity metrics could guide collaboration decisions, existing approaches have critical limitations: unbounded costs that lack interpretability across domains, requirements for direct data access that violate FL's privacy constraints, and poor sample efficiency. We propose a novel metric, computed from model representations extracted after a single federated training round, that predicts whether collaboration will improve performance. Our approach formulates similarity assessment as an optimal transport problem with a hybrid cost function that captures both feature-level differences and label distribution divergence between clients. We ensure privacy through a careful composition of Secure Multiparty Computation (SMC) and Differential Privacy (DP) mechanisms. Our theoretical analysis establishes a formal connection between the proposed metric and weight divergence in federated training, explaining why early-round activations can predict long-term collaboration outcomes. Empirically, the metric remains tightly correlated with weight divergence throughout training, reinforcing the validity of our single-round probe. Extensive experiments on synthetic benchmarks and real-world medical imaging tasks demonstrate that our metric reliably identifies beneficial collaborations, providing practitioners with an actionable tool for participant selection in cross-silo FL.