Abstract
Split Federated Learning (SplitFed) has emerged as a decentralized approach to training ML models that enables multiple healthcare parties to collaboratively train models without sharing their raw data. This method, however, is vulnerable to label inference attacks, which can compromise patient privacy. Previous research efforts have attempted to address this problem, but they do not conduct a detailed vulnerability analysis of SplitFed against label inference attacks. Moreover, some of these efforts propose differential privacy (DP) as a solution, yet they focus on distributed learning paradigms in which the training labels are available to the clients, which is not a practical assumption. To address this, in this paper we investigate the vulnerability of SplitFed models to label inference attacks in biomedical imaging, provide a detailed vulnerability analysis of SplitFed against label inference attacks specific to healthcare applications, and propose a DP-based method that incorporates noise into SplitFed to mitigate such attacks. Our results demonstrate the efficacy of the SplitFed model under multiple conditions and show that the label inference accuracy changes from [Formula: see text] (no DP) to [Formula: see text] (with DP), indicating that the integration of DP offers a robust mechanism for protecting patient privacy. Among the noise distributions evaluated, Cauchy noise provides the best protection, with a label inference accuracy of 0%, while Exponential noise is the worst, resulting in a label inference accuracy of 68%.
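To make the defense concrete, the sketch below illustrates one plausible way to inject Cauchy-distributed noise into the client-side "smashed" activations at the cut layer of a split model before they are transmitted to the server. This is a minimal, hedged illustration, not the paper's exact implementation: the layer architecture, the NoisyClientHead class name, and the noise_scale parameter are assumptions introduced here for clarity.

```python
# Minimal sketch (assumed implementation, not the paper's code): adding
# zero-centered Cauchy noise to the cut-layer activations of a SplitFed
# client before they leave the client device.
import torch
import torch.nn as nn

class NoisyClientHead(nn.Module):
    """Client-side half of a split model with DP-style noise at the cut layer."""

    def __init__(self, noise_scale: float = 0.5):
        super().__init__()
        # Hypothetical client-side layers up to the cut layer.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.noise_scale = noise_scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        smashed = self.features(x)
        if self.training:
            # Perturb the activations with Cauchy noise before transmission;
            # in practice the scale would be calibrated to the privacy budget.
            noise = torch.distributions.Cauchy(0.0, self.noise_scale).sample(smashed.shape)
            smashed = smashed + noise
        return smashed

client = NoisyClientHead(noise_scale=0.5)
batch = torch.randn(8, 1, 28, 28)  # e.g., a batch of grayscale medical images
protected = client(batch)          # noisy activations sent to the server
```

Other noise distributions (e.g., Exponential, as compared in the results above) could be substituted by swapping the torch.distributions sampler, which is presumably how the different noise categories were evaluated.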