Practical Implementation of Federated Learning for Detecting Backdoor Attacks in a Next-word Prediction Model


Abstract

This article details the development of a next-word prediction model trained with federated learning and introduces a mechanism for detecting backdoor attacks. Federated learning enables multiple devices to collaboratively train a shared model while keeping their data local. However, this decentralized approach is susceptible to manipulation by malicious actors who control a subset of participating devices and use them to bias the model's outputs on specific topics, such as a presidential election. The proposed detection mechanism identifies devices with anomalous datasets and excludes them from the training process, thereby mitigating the influence of such attacks. Using a presidential election as a case study, the study demonstrates a positive correlation between the proportion of compromised devices and the degree of bias in the model's outputs. The findings indicate that the detection mechanism effectively reduces the impact of backdoor attacks, particularly when the number of compromised devices is relatively low. This research contributes to enhancing the robustness of federated learning systems against malicious manipulation, supporting more reliable and unbiased model performance.
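The abstract describes excluding anomalous participants before aggregation. The sketch below is a minimal illustration of that general idea, not the paper's actual algorithm: it applies a simple robust-outlier rule (distance to the coordinate-wise median of client updates, which is an assumption on my part) to drop suspicious clients before federated averaging. The function name `filtered_fedavg` and the threshold rule are hypothetical.

```python
# Hypothetical sketch: filter out clients whose model updates deviate
# strongly from the group median before averaging, to limit the
# influence of backdoored clients on the shared model.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def filtered_fedavg(client_updates, threshold=2.0):
    """Average client updates, excluding anomalous ones.

    An update is flagged as anomalous when its distance to the
    coordinate-wise median update exceeds `threshold` times the
    median of all such distances (an assumed outlier rule, not
    the paper's detection mechanism).
    """
    n = len(client_updates)
    dim = len(client_updates[0])
    # coordinate-wise median of all client updates
    median = [sorted(u[i] for u in client_updates)[n // 2]
              for i in range(dim)]
    dists = [euclidean(u, median) for u in client_updates]
    med_dist = sorted(dists)[n // 2]
    kept = [u for u, d in zip(client_updates, dists)
            if med_dist == 0 or d <= threshold * med_dist]
    # federated averaging over the retained clients only
    return [sum(u[i] for u in kept) / len(kept) for i in range(dim)]

# Example: three benign clients and one poisoned client whose
# update points in a very different direction.
benign = [[0.1, 0.2], [0.11, 0.19], [0.09, 0.21]]
poisoned = [[5.0, -4.0]]
agg = filtered_fedavg(benign + poisoned)
# The poisoned update is excluded, so `agg` stays near [0.1, 0.2].
```

In a real deployment the "updates" would be high-dimensional weight deltas and the detection would typically consider per-client data statistics as well, but the filter-then-average structure is the same.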
