Abstract
Federated Learning (FL) enables multiple clients to cooperatively train a model without sharing their local data: each client sends its trained local model to a server for aggregation. Although FL improves data privacy by keeping each client's training data local, its security can be compromised by an untrusted server, which may infer clients' identities and private data from the uploaded parameters, or execute the aggregation protocol incorrectly and falsify the aggregation result. To preserve both the security and the model accuracy of an FL scheme, we must therefore protect clients' private information and mitigate the impact of untrusted servers. To defend against an attacker inferring a client's private information from the partial model parameters it uploads, we propose a partial-model-parameter defense scheme based on group shuffling of parameter fragments. The scheme splits the model parameters into fragments and applies differential-privacy perturbation to each fragment, so that an attacker can access only partial, perturbed parameter information and can never obtain the complete model parameters. To further prevent an attacker from reverse-engineering a client's identity from the shared parameters, we also propose a group-shuffling model that disrupts the order of the perturbed parameter fragments. Experimental results show that the average test accuracy of our scheme exceeds that of the FL-CDP, FL-LDP, and AdaComp schemes by about 1.30%, 0.09%, and 0.03%, respectively, preserving the global training accuracy of the model while defending against attacks on local model parameters.
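The fragment-perturb-shuffle pipeline summarized above can be sketched as follows. This is a minimal illustration, not the paper's exact construction: the fragment count, the use of the Laplace mechanism with unit sensitivity, and all function names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def fragment(params, num_fragments):
    # Split a flat parameter vector into roughly equal fragments
    # (illustrative fragmentation; the paper's exact splitting may differ).
    return np.array_split(params, num_fragments)

def perturb(fragments, epsilon, sensitivity=1.0):
    # Add Laplace noise to each fragment as a stand-in for the scheme's
    # differential-privacy perturbation (assumed mechanism).
    scale = sensitivity / epsilon
    return [f + rng.laplace(0.0, scale, size=f.shape) for f in fragments]

def group_shuffle(fragments_per_client):
    # Pool the perturbed fragments of all clients in a group and shuffle
    # them, so the server cannot link a fragment back to its client.
    pool = [frag for frags in fragments_per_client for frag in frags]
    order = rng.permutation(len(pool))
    return [pool[i] for i in order]

# Example: two clients, each holding a 6-parameter local model.
clients = [rng.normal(size=6) for _ in range(2)]
shuffled = group_shuffle(
    [perturb(fragment(c, num_fragments=3), epsilon=1.0) for c in clients]
)
```

After this step, the server sees only noisy fragments in a client-agnostic order; aggregation over the pooled fragments would then proceed without the server ever holding any single client's complete parameter vector.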