Splitting smarter: Differential privacy for secure healthcare federated learning


Abstract

Split Federated Learning (SplitFed) has emerged as a decentralized method of training machine-learning models that enables multiple healthcare parties to train a shared model without exchanging their raw data. The method, however, is vulnerable to label inference attacks, which can compromise patient privacy. Previous research has attempted to address this problem, but these works do not conduct a detailed vulnerability analysis of SplitFed against label inference attacks. Moreover, the efforts that propose differential privacy (DP) as a defense focus on distributed learning paradigms in which the training labels are available to the clients, which is not a practical assumption. To address this, we investigate the vulnerability of SplitFed models to label inference attacks in biomedical imaging, provide a detailed vulnerability analysis of SplitFed against label inference attacks specific to healthcare applications, and propose a DP-based method for mitigating such attacks on SplitFed models. Results demonstrate the efficacy of the SplitFed model under multiple conditions: the label inference accuracy drops from [Formula: see text] (no DP) to [Formula: see text] (with DP), indicating that integrating DP offers a robust mechanism for protecting patient privacy. Furthermore, Cauchy noise provides the best protection of all the noise distributions evaluated, achieving a label inference accuracy of 0%, while Exponential noise performs worst, yielding a label inference accuracy of 68%.
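The defense the abstract describes, adding DP noise on the client side so that the signals sent to the server leak less label information, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `add_dp_noise`, the choice of perturbing the cut-layer activations, and the noise scales are all assumptions for demonstration; the paper's comparison of noise distributions (e.g., Cauchy vs. Exponential) motivates the `mechanism` switch.

```python
import numpy as np

def add_dp_noise(activations, mechanism="cauchy", scale=1.0, rng=None):
    """Perturb client-side cut-layer activations ("smashed data") before
    they are sent to the server. `mechanism` selects the noise
    distribution; `scale` controls its spread (larger = more privacy,
    less utility). A hypothetical helper, not the paper's exact method."""
    rng = np.random.default_rng() if rng is None else rng
    if mechanism == "cauchy":
        # Heavy-tailed Cauchy noise (best label protection in the paper)
        noise = rng.standard_cauchy(activations.shape) * scale
    elif mechanism == "laplace":
        # Classic Laplace mechanism noise
        noise = rng.laplace(0.0, scale, activations.shape)
    elif mechanism == "exponential":
        # Exponential noise, centered by subtracting its mean (= scale)
        noise = rng.exponential(scale, activations.shape) - scale
    else:
        raise ValueError(f"unknown mechanism: {mechanism}")
    return activations + noise

# Example: perturb a batch of 4 x 8 smashed-data activations
acts = np.ones((4, 8))
noisy = add_dp_noise(acts, mechanism="cauchy", scale=0.5,
                     rng=np.random.default_rng(0))
```

In a SplitFed round, each client would run its front-end layers, pass the result through a perturbation step like this, and forward only the noisy activations; the calibration of `scale` to a formal (epsilon, delta) privacy budget is beyond this sketch.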
