Secure Hierarchical Asynchronous Federated Learning with Shuffle Model and Mask-DP


Abstract

Hierarchical asynchronous federated learning (HAFL) better accommodates real-world network topologies and supports practical communication and efficient aggregation. However, existing HAFL schemes still struggle to balance privacy preservation and robustness: malicious training nodes may infer the private data of other training nodes or poison the global model, undermining the system's robustness. To address these issues, we propose a secure hierarchical asynchronous federated learning (SHAFL) framework. SHAFL organizes training nodes into groups according to their respective gateways. Within each group, training nodes use a mask-DP exchange protocol to prevent inference attacks by gateways and committee nodes, and employ homomorphic encryption (HE) to prevent collusion attacks by other training nodes. Compared with conventional solutions, SHAFL uses noise that can later be eliminated, reducing its impact on global model performance, while a shuffle model and subsampling strengthen the privacy guarantees of local models. During global model aggregation, SHAFL weighs both model accuracy and communication delay, effectively reducing the impact of malicious and stale models on system performance. Theoretical analysis and experimental evaluation demonstrate that SHAFL outperforms state-of-the-art solutions in convergence, security, robustness, and privacy preservation.
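The abstract does not specify the mask-DP exchange protocol, but the core idea of noise that "can be eliminated" can be sketched with pairwise-cancelling masks, as in standard secure-aggregation constructions: each pair of nodes derives a shared random vector, one adds it and the other subtracts it, so the masks vanish when the aggregator sums the masked updates. All names and the integer-seed scheme below are illustrative assumptions, not the paper's protocol (in practice the shared randomness would come from a pairwise key agreement, and DP noise would be added on top).

```python
import random

def pairwise_masks(node_ids, dim, seed_base=0):
    """Pairwise-cancelling masks: for each pair (i, j) with i < j, node i
    adds a shared random vector and node j subtracts it, so the masks
    vanish when the aggregator sums all masked updates."""
    masks = {i: [0.0] * dim for i in node_ids}
    for a in range(len(node_ids)):
        for b in range(a + 1, len(node_ids)):
            i, j = node_ids[a], node_ids[b]
            # shared integer seed stands in for a pairwise key agreement
            rng = random.Random(seed_base * 1_000_003 + i * 1009 + j)
            for k in range(dim):
                m = rng.uniform(-1.0, 1.0)
                masks[i][k] += m  # node i adds the shared mask
                masks[j][k] -= m  # node j subtracts it
    return masks

# toy local model updates from three training nodes
updates = {0: [1.0, 2.0], 1: [0.5, -1.0], 2: [-0.5, 3.0]}
masks = pairwise_masks(list(updates), dim=2)
masked = {i: [u + m for u, m in zip(updates[i], masks[i])] for i in updates}

# the aggregator sees only masked updates; the masks cancel in the sum,
# recovering the exact aggregate of the plain updates
agg = [sum(masked[i][k] for i in masked) for k in range(2)]
```

Because the masks cancel exactly in the aggregate, this kind of removable noise does not degrade the global model the way persistent per-node DP noise would, which is consistent with the abstract's claim.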
