Abstract
INTRODUCTION: Federated Learning (FL) has become an attractive approach for e-health because it allows multiple institutions to collaboratively train machine learning models without directly sharing sensitive patient data. Despite these advantages, FL systems remain susceptible to poisoning attacks, in which malicious participants manipulate model updates to degrade performance or embed hidden backdoors. Such threats raise serious concerns for medical applications, where reliability, transparency, and regulatory compliance are essential.

METHODS: In this work, we introduce FedSecure-Chain, a modular framework designed to improve the reliability of federated learning environments. The proposed approach consists of three phases: an anomaly detection stage applied before aggregation to identify suspicious client updates, a robust aggregation strategy to limit the influence of potentially malicious contributions, and a lightweight blockchain layer that records model updates and client trust information to ensure traceability and auditability. The framework was evaluated on breast cancer datasets using TabNet and compact multilayer perceptron (MLP) models under several poisoning attack scenarios and different non-IID data distributions.

RESULTS: The experimental evaluation indicates that integrating anomaly detection with robust aggregation significantly reduces the impact of poisoning attacks on the global model. In addition, the blockchain logging layer enables transparent tracking of model updates while introducing only limited overhead. Overall, the proposed framework maintains stable model performance even in the presence of adversarial participants.

DISCUSSION: The results suggest that combining defensive learning strategies with transparent logging mechanisms can strengthen trust in federated healthcare systems.
By improving resilience to adversarial manipulation while keeping computational and operational costs manageable, our method represents a practical step toward secure and trustworthy federated learning for healthcare applications.
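The three-phase pipeline summarized above can be illustrated with a minimal sketch. This is not the paper's implementation: the anomaly heuristic (a robust z-score on update norms), the aggregation rule (coordinate-wise median), the hash-chain stand-in for the blockchain layer, and all names and thresholds are assumptions chosen for illustration.

```python
# Minimal sketch of the FedSecure-Chain pipeline; all names, thresholds,
# and rules below are illustrative assumptions, not the paper's method.
import hashlib
import json

import numpy as np


def flag_anomalies(updates, z_thresh=2.5):
    """Phase 1 (assumed heuristic): flag client updates whose L2 norm
    deviates strongly from the cohort median (robust z-score via MAD)."""
    norms = np.array([np.linalg.norm(u) for u in updates])
    med = np.median(norms)
    mad = np.median(np.abs(norms - med)) + 1e-12
    scores = 0.6745 * (norms - med) / mad
    return [i for i, s in enumerate(scores) if abs(s) > z_thresh]


def robust_aggregate(updates, flagged):
    """Phase 2 (assumed rule): coordinate-wise median over the updates
    that were not flagged, bounding any single client's influence."""
    kept = [u for i, u in enumerate(updates) if i not in flagged]
    return np.median(np.stack(kept), axis=0)


class HashChainLog:
    """Phase 3 (simplified): an append-only hash chain standing in for the
    lightweight blockchain layer that records updates and trust info."""

    def __init__(self):
        self.blocks = []

    def append(self, record):
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        payload = json.dumps(record, sort_keys=True) + prev
        digest = hashlib.sha256(payload.encode()).hexdigest()
        self.blocks.append({"record": record, "prev": prev, "hash": digest})


# One round with 4 honest clients and 1 poisoned (heavily scaled) update.
rng = np.random.default_rng(0)
updates = [rng.normal(0, 1, 10) for _ in range(4)]
updates.append(rng.normal(0, 1, 10) * 50.0)  # poisoned client 4

flagged = flag_anomalies(updates)            # phase 1: detect outliers
global_update = robust_aggregate(updates, flagged)  # phase 2: aggregate
log = HashChainLog()                         # phase 3: tamper-evident log
log.append({"round": 1, "flagged_clients": flagged})
```

In this toy round, the scaled update's norm sits far outside the honest cohort, so it is flagged before aggregation, and the logged block chains to a fixed genesis hash so any later tampering with the record breaks the chain.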