Federated learning and differential privacy: Machine learning and deep learning for biomedical image data classification


Abstract

BACKGROUND: The integration of differential privacy and federated learning in healthcare is key to maintaining patient confidentiality while ensuring accurate predictive modeling. With increasing concerns about privacy, it is essential to explore methods that protect data privacy without compromising model performance. OBJECTIVE: This study evaluates the effectiveness of feedforward neural networks (FNNs), Gaussian processes (GPs), and a multilayer perceptron (MLP), a class of deep neural networks (DNNs), in classifying biomedical image data, incorporating federated learning to enhance privacy preservation. METHOD: We implemented FNN, GP, and MLP models using federated learning and differential privacy techniques. Models were evaluated based on training and validation accuracy, correlation coefficients, mean absolute error (MAE), root mean squared error (RMSE), and relative errors, including relative absolute error (RAE) and root relative squared error (RRSE). RESULTS: The FNN achieved 86.49% training accuracy and 82.08% overall accuracy but showed potential overfitting, with 68.75% validation accuracy. The GP model had a correlation coefficient of 0.9741, an MAE of 108.38, and an RMSE of 173.49. The DNN outperformed the other models with a correlation coefficient of 0.9980, an MAE of 36.80, and an RMSE of 51.01. Federated learning improved privacy while maintaining model performance. CONCLUSION: Federated learning with differential privacy offers a promising solution for secure and accurate biomedical image classification, supporting privacy-preserving machine learning in medical diagnostics without compromising performance.
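The abstract does not specify how differential privacy is injected into the federated training loop, but a common pattern (DP-FedAvg-style aggregation) is to clip each client's model update to a bounded L2 norm and add calibrated Gaussian noise to the average. The sketch below is a minimal illustration of that idea, not the authors' implementation; the function names, the clipping bound, and the noise multiplier are all assumptions for demonstration.

```python
import numpy as np

def clip_update(update, clip_norm):
    """Clip a client update to L2 norm <= clip_norm (bounds each client's influence)."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / norm) if norm > 0 else update

def dp_federated_average(client_updates, clip_norm=1.0, noise_multiplier=0.5, rng=None):
    """Average clipped client updates, then add Gaussian noise scaled to the
    clipping bound (hypothetical DP-FedAvg-style aggregation sketch)."""
    rng = rng or np.random.default_rng(0)
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    mean = np.mean(clipped, axis=0)
    # Noise standard deviation: noise_multiplier * sensitivity / number of clients.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(client_updates),
                       size=mean.shape)
    return mean + noise

# Three hypothetical clients contribute gradient-like updates;
# the second client's large update is clipped before averaging.
updates = [np.array([0.2, -0.1]), np.array([5.0, 5.0]), np.array([0.1, 0.3])]
agg = dp_federated_average(updates)
```

The server would apply `agg` to the global model each round; the privacy guarantee depends on the clipping bound, the noise multiplier, and the number of rounds, typically tracked with a privacy accountant.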
