ALDP-FL for adaptive local differential privacy in federated learning


Abstract

Federated learning, as an emerging distributed learning framework, enables model training without compromising user data privacy. However, malicious attackers may still infer sensitive user information by analyzing model updates during the federated learning process. To address this issue, this paper proposes an Adaptive Localized Differential Privacy Federated Learning (ALDP-FL) method. This approach dynamically sets the clipping threshold for each network layer's updates based on the historical moving average of their L2-norm, thereby injecting adaptive noise into each layer. Additionally, a bounded perturbation mechanism is designed to minimize the impact of the added noise on model accuracy. A privacy analysis of the method is provided. Finally, experiments on the MNIST, Fashion MNIST, and CIFAR-10 datasets demonstrate the effectiveness and practicality of the proposed method. Specifically, ALDP-FL achieves an average improvement of over 10% across all evaluation metrics: Accuracy increases by 10.57%, Precision by 10.64%, Recall by 10.52%, and F1 Score by 10.64%. For images reconstructed under the iDLG attack, the average improvement rates in MSE and SSIM reach 391.2% and -85.4%, respectively, significantly outperforming all other comparison methods.
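The following is a minimal sketch of the per-layer adaptive clipping and perturbation idea the abstract describes, assuming an exponential moving average of each layer's L2-norm as the clipping threshold, Laplace noise, and a simple coordinate-wise truncation as the bounded perturbation. The function name and the parameters `beta` and `epsilon` are illustrative assumptions, not the paper's exact mechanism.

```python
import numpy as np

def adaptive_clip_and_perturb(layer_updates, norm_history, beta=0.9,
                              epsilon=1.0, rng=None):
    """Illustrative per-layer adaptive clipping with bounded noise.

    layer_updates: dict mapping layer name -> flat np.ndarray holding the
                   client's local model update for that layer.
    norm_history:  dict mapping layer name -> moving-average L2 norm from
                   previous rounds; updated in place each call.
    beta, epsilon, and the Laplace/truncation choices are assumptions made
    for this sketch, not details taken from the paper.
    """
    rng = rng or np.random.default_rng()
    perturbed = {}
    for name, update in layer_updates.items():
        norm = np.linalg.norm(update)
        # Exponential moving average of the layer's L2 norm serves as the
        # adaptive clipping threshold C for this round.
        prev = norm_history.get(name, norm)
        C = beta * prev + (1 - beta) * norm
        norm_history[name] = C
        # Clip the layer update so its L2 norm is at most C.
        clipped = update * min(1.0, C / (norm + 1e-12))
        # Add noise scaled to the clipping threshold, then truncate the
        # noisy update to [-C, C] per coordinate as a simple stand-in for
        # the bounded perturbation mechanism.
        noise = rng.laplace(scale=C / epsilon, size=update.shape)
        perturbed[name] = np.clip(clipped + noise, -C, C)
    return perturbed
```

Because each layer's threshold tracks the recent magnitude of its own updates, large early-training gradients and small late-training gradients receive proportionate clipping and noise, which is the intuition behind injecting "adaptive" noise per layer.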
