Abstract
Federated Learning (FL) has emerged as a critical paradigm enabling financial institutions to collaboratively train models while preserving data privacy. However, a fundamental challenge arises when Differential Privacy (DP) is applied to financial risk management tasks characterized by severe class imbalance: the uniform noise injection of standard DP mechanisms disproportionately drowns out the critical risk signals carried by rare events such as fraud or default. To address this, we introduce IVA-FL (Information-Value-Aware Federated Learning), a novel framework that shifts privacy preservation from data-agnostic optimization to a value-driven approach. IVA-FL integrates three synergistic mechanisms: 1) an Information Value Scoring (IVS) mechanism that dynamically quantifies the importance of each sample from its real-time training loss; 2) an Adaptive Gradient Processing module that applies tailored clipping and smoothing strategies to preserve high-value gradients; and 3) an Adaptive Noise Injection mechanism that intelligently reallocates the scarce privacy budget to maximize the signal-to-noise ratio of critical risk signals. Comprehensive experiments on multiple financial datasets demonstrate that IVA-FL significantly outperforms state-of-the-art baselines on both standard metrics (Recall, AUC) and practical risk assessment indicators (KS statistic, Brier score), especially under strict privacy constraints. Furthermore, the framework exhibits exceptional robustness to data heterogeneity (Non-IID distributions) and superior adaptability in advanced scenarios involving feature heterogeneity and concept drift. Our work presents a robust, high-utility solution for privacy-preserving financial risk management.