A polynomial proxy model approach to verifiable decentralized federated learning


Abstract

Decentralized Federated Learning improves data privacy and eliminates single points of failure by removing reliance on centralized storage and model aggregation in distributed computing systems. Ensuring the integrity of computations during local model training is a significant challenge, especially before gradient updates are shared by each local client. Current methods for ensuring computation integrity often involve patching local models to implement cryptographic techniques such as Zero-Knowledge Proofs. However, this approach becomes highly complex, and sometimes impractical, for large-scale models that use techniques such as random dropout to improve training convergence. Random dropout creates non-deterministic behavior, making it difficult to verify model updates under deterministic protocols. To address this issue, we propose ProxyZKP, a novel framework that combines Zero-Knowledge Proofs with polynomial proxy models to ensure computation integrity during local training. Each local node pairs a private model for online deep learning applications with a proxy model that mediates decentralized model training by exchanging gradient updates. The multivariate polynomial nature of the proxy models facilitates the application of Zero-Knowledge Proofs. These proofs verify the computation integrity of updates from each node without disclosing private data. Experimental results indicate that ProxyZKP significantly reduces computational load. Specifically, ProxyZKP achieves proof generation times that are 30-50% faster than established methods such as zk-SNARKs and Bulletproofs. This improvement is largely due to the high parallelization potential of the univariate polynomial decomposition approach. Additionally, integrating Differential Privacy into the ProxyZKP framework reduces the risk of Gradient Inversion attacks by adding calibrated noise to the gradients, while maintaining competitive model accuracy.
The results demonstrate that ProxyZKP is a scalable and efficient solution for ensuring training integrity in decentralized federated learning environments, particularly in scenarios with frequent model updates and strong scalability requirements.
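To make the core ideas in the abstract concrete, the sketch below shows a minimal polynomial proxy model whose gradient update is perturbed with calibrated Gaussian noise before sharing, in the spirit of the Differential Privacy integration described above. This is an illustrative assumption, not the paper's implementation: the degree-2 polynomial form, the function names (`proxy_forward`, `proxy_gradients`, `dp_noise`), and the clipping/noise parameters are all hypothetical choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def proxy_forward(x, w2, w1, b):
    """Degree-2 multivariate polynomial proxy: y = w2·x² + w1·x + b.
    (Hypothetical form; the paper only states that proxies are
    multivariate polynomials.)"""
    return np.dot(w2, x**2) + np.dot(w1, x) + b

def proxy_gradients(x, w2, w1, b, y_true):
    """Gradients of the squared error w.r.t. the proxy coefficients."""
    err = proxy_forward(x, w2, w1, b) - y_true
    return {"w2": 2 * err * x**2, "w1": 2 * err * x,
            "b": np.atleast_1d(2 * err)}

def dp_noise(grad, clip_norm=1.0, sigma=0.5):
    """Gaussian-mechanism style perturbation: clip the gradient's
    L2 norm, then add noise calibrated to the clipping bound."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, size=grad.shape)

# One local step: compute proxy gradients, then noise them before sharing.
x = np.array([0.2, -0.5, 1.0])
w2, w1, b = np.zeros(3), np.zeros(3), 0.0
grads = proxy_gradients(x, w2, w1, b, y_true=1.0)
noisy = {name: dp_noise(g) for name, g in grads.items()}
```

Because the proxy is a low-degree polynomial, its forward pass and gradients are fixed arithmetic circuits, which is what makes them amenable to deterministic Zero-Knowledge Proof protocols, unlike a private model that uses random dropout.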
