A robust and verifiable federated learning framework for preventing data poisoning threats in e-health

Abstract

INTRODUCTION: Federated Learning (FL) has become an attractive approach for e-health because it allows multiple institutions to collaboratively train machine learning models without directly sharing sensitive patient data. Despite these advantages, FL systems remain susceptible to poisoning attacks, in which malicious participants manipulate model updates to degrade performance or embed hidden backdoors. Such threats raise serious concerns for medical applications, where reliability, transparency, and regulatory compliance are essential.

METHODS: In this work, we introduce FedSecure-Chain, a modular framework designed to improve the reliability of federated learning environments. The proposed approach combines three phases: an anomaly detection stage applied before aggregation to identify suspicious client updates, a robust aggregation strategy that limits the influence of potentially malicious contributions, and a lightweight blockchain layer that records model updates and client trust information to ensure traceability and auditing. The framework was evaluated on Breast Cancer datasets using TabNet and compact multilayer perceptron (MLP) models under several poisoning attack scenarios and different non-IID data distributions.

RESULTS: The experimental evaluation indicates that integrating anomaly detection with robust aggregation significantly reduces the impact of poisoning attacks on the global model. In addition, the blockchain logging layer enables transparent tracking of model updates while introducing only limited overhead. Overall, the proposed framework maintains stable model performance even in the presence of adversarial participants.

DISCUSSION: The results suggest that combining defensive learning strategies with transparent logging mechanisms can strengthen trust in federated healthcare systems. By improving resilience to adversarial manipulation while keeping computational and operational costs manageable, our method represents a practical step toward secure and trustworthy federated learning for healthcare applications.
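To make the two defensive phases concrete, the sketch below pairs a simple anomaly filter with a robust aggregator. The abstract does not specify which detector or aggregation rule FedSecure-Chain uses, so this is only an illustrative stand-in: a z-score test on update norms plays the role of the anomaly-detection stage, and a coordinate-wise trimmed mean plays the role of the robust aggregation strategy. All function names and thresholds here are assumptions for illustration.

```python
import numpy as np

def filter_suspicious(updates, z_thresh=2.5):
    """Stand-in anomaly detector: flag client updates whose L2 norm
    deviates strongly (z-score) from the cohort before aggregation."""
    norms = np.array([np.linalg.norm(u) for u in updates])
    mu, sigma = norms.mean(), norms.std() + 1e-12
    keep = np.abs(norms - mu) / sigma < z_thresh
    return [u for u, k in zip(updates, keep) if k]

def trimmed_mean_aggregate(updates, trim_ratio=0.2):
    """Stand-in robust aggregator: per coordinate, drop the largest and
    smallest values before averaging, bounding any one client's pull."""
    stacked = np.sort(np.stack(updates), axis=0)
    k = int(len(updates) * trim_ratio)
    trimmed = stacked[k:len(updates) - k] if k > 0 else stacked
    return trimmed.mean(axis=0)

# Nine honest clients send small updates; one poisoner sends a scaled-up one.
rng = np.random.default_rng(0)
honest = [rng.normal(0.0, 0.1, size=10) for _ in range(9)]
all_updates = honest + [np.full(10, 50.0)]  # the poisoned update

survivors = filter_suspicious(all_updates)
global_update = trimmed_mean_aggregate(survivors)
print(len(survivors))                       # the poisoned update is filtered out
print(np.abs(global_update).max() < 1.0)    # aggregate stays near the honest mean
```

In a full pipeline, each surviving update (or its hash) would additionally be appended to the blockchain layer so that the provenance of every aggregation round remains auditable.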
