Abstract
BACKGROUND: China's Personal Information Protection Law (PIPL) imposes strict requirements on personal data handling, particularly in educational contexts where teacher data privacy is critical. Traditional centralized machine learning approaches carry significant risks of data breaches and non-compliance. Federated Learning (FL) offers a promising decentralized alternative that enables collaborative model training without sharing raw data. METHODS: This study combines quantitative simulations with qualitative compliance analysis to evaluate FL frameworks against PIPL principles, focusing on differential privacy as the primary empirically validated mechanism for noise addition and formal privacy guarantees. Other techniques, such as Secure Multi-Party Computation (SMC), are analyzed theoretically for their alignment with PIPL requirements such as data minimization, anonymization, and encrypted transmission. RESULTS: Experimental simulations demonstrate that FL substantially reduces data breach risks compared with centralized methods. Through local data processing, differential privacy mechanisms, and secure aggregation, FL achieves principle-level compliance with the PIPL, improving privacy preservation while maintaining model performance. CONCLUSION: FL conceptually supports teacher data privacy protection under the PIPL framework. This study proposes a tailored compliance framework that integrates FL with privacy-enhancing technologies, offering theoretical foundations and practical recommendations that help educational institutions and technology implementers deploy privacy-preserving machine learning solutions.
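To make the mechanisms named above concrete, the following is a minimal, illustrative sketch of one federated averaging round in which each client clips its model update and adds Gaussian noise for differential privacy before the server averages the updates. It is not the study's experimental code; all names and hyperparameters (local_update, privatize, clip_norm, noise_multiplier, the synthetic client data) are assumptions introduced only for illustration.

# Illustrative sketch only: one FedAvg-style round with per-client update
# clipping and Gaussian noise for differential privacy. All names and
# hyperparameters are assumptions for illustration, not the study's code.
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_model: np.ndarray, client_data: np.ndarray,
                 lr: float = 0.1) -> np.ndarray:
    """Toy local training step: nudge the model toward the client's data mean."""
    return global_model + lr * (client_data.mean(axis=0) - global_model)

def privatize(update: np.ndarray, clip_norm: float,
              noise_multiplier: float) -> np.ndarray:
    """Clip the update's L2 norm, then add Gaussian noise (Gaussian mechanism)."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

# Simulated teacher data held locally by three institutions; raw records
# never leave the client in this scheme.
clients = [rng.normal(loc=mu, scale=1.0, size=(100, 4)) for mu in (0.0, 0.5, 1.0)]
global_model = np.zeros(4)

for rnd in range(5):                                   # a few federated rounds
    deltas = []
    for data in clients:
        local_model = local_update(global_model, data)
        delta = local_model - global_model             # only the update is shared
        deltas.append(privatize(delta, clip_norm=1.0, noise_multiplier=0.5))
    # Plain averaging stands in here for secure aggregation of the noisy updates.
    global_model = global_model + np.mean(deltas, axis=0)

print("Aggregated global model:", global_model)

In this sketch only clipped, noised updates leave each client, which is the property that the abstract's references to local data processing, differential privacy, and secure aggregation rely on.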