Abstract
Hierarchical asynchronous federated learning (HAFL) better accommodates realistic network conditions, enabling practical communication and efficient aggregation. However, existing HAFL schemes still struggle to balance privacy preservation and robustness. Malicious training nodes may infer the private data of other training nodes or poison the global model, thereby damaging the system's robustness. To address these issues, we propose a secure hierarchical asynchronous federated learning (SHAFL) framework. SHAFL organizes training nodes into multiple groups based on their respective gateways. Within each group, the training nodes prevent inference attacks from the gateways and committee nodes via a mask-DP exchange protocol, and employ homomorphic encryption (HE) to prevent collusion attacks from other training nodes. Compared with conventional solutions, SHAFL uses noise that can later be eliminated, reducing the impact of noise on the global model's performance, while employing a shuffle model and subsampling to strengthen the privacy guarantees of local models. During global model aggregation, SHAFL weighs both model accuracy and communication delay, effectively reducing the impact of malicious and stale models on system performance. Theoretical analysis and experimental evaluations demonstrate that SHAFL outperforms state-of-the-art solutions in terms of convergence, security, robustness, and privacy preservation.