Combining parameter fragmentation and group shuffling to defend against the untrustworthy server in federated learning


Abstract

Federated Learning (FL) enables multiple clients to cooperatively train a model without sharing local data: each client sends its locally trained model to a server for aggregation. Although FL improves data privacy by keeping training data local, its security can be compromised by an untrusted server, which may infer clients' identities and private data from the uploaded parameters, or execute the aggregation protocol incorrectly to falsify the aggregation result. To ensure both the security and the model accuracy of a federated learning scheme, we must therefore protect clients' private information and mitigate the influence of the untrusted server. To prevent an attacker from inferring a client's private information from the partial model parameters the client uploads, we propose a defense scheme based on group shuffling of parameter fragments. The scheme splits the parameters into fragments and applies differential privacy perturbation to each fragment, so that an attacker can access only partial, perturbed parameter information and never the complete model parameters. Furthermore, to prevent an attacker from reverse-engineering a client's identity from the shared parameters, we propose a group shuffling model that disrupts the order of the perturbed parameter fragments. Experimental results show that the average test accuracy of our scheme exceeds that of the FL-CDP, FL-LDP, and AdaComp schemes by about 1.30%, 0.09%, and 0.03%, respectively, preserving the global training accuracy of the model while defending against attacks on local model parameters.
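The client-side pipeline described above (fragment the local parameters, perturb each fragment with differential privacy, then shuffle the fragment order) can be sketched as follows. This is a minimal single-client illustration under stated assumptions, not the authors' implementation: the Laplace mechanism, the fragment count, and the `fragment_perturb_shuffle` helper are hypothetical choices for illustration, and in the full scheme the shuffling would be performed over the fragments of a whole group of clients by a shuffler rather than locally.

```python
import math
import random


def laplace_noise(scale, rng):
    # Sample Laplace(0, scale) via inverse-CDF sampling.
    u = rng.random() - 0.5  # u in [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))


def fragment_perturb_shuffle(params, n_fragments, epsilon, sensitivity, rng):
    """Split a flat parameter list into fragments, add Laplace noise to
    each value, and shuffle the fragment order before upload.

    Each fragment keeps its index so the aggregator (or the clients, in
    the group-shuffling protocol) can later reassemble positions.
    """
    scale = sensitivity / epsilon  # Laplace scale for epsilon-DP (assumed mechanism)
    frag_len = math.ceil(len(params) / n_fragments)
    fragments = [params[i:i + frag_len] for i in range(0, len(params), frag_len)]
    noisy = [
        (idx, [v + laplace_noise(scale, rng) for v in frag])
        for idx, frag in enumerate(fragments)
    ]
    rng.shuffle(noisy)  # the server sees fragments in a disrupted order
    return noisy
```

A client would call this on its flattened model update before sending anything to the server; with only shuffled, noisy fragments, the server never observes the complete ordered parameter vector of any single client.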
