Abstract
This paper presents HealthFedDP, a novel bidirectional adaptive differential privacy framework for federated learning in healthcare applications. Bidirectional adaptive differential privacy is a two-way protection scheme: a dual-layer noise injection mechanism adds calibrated noise both to client parameters at each healthcare provider and to aggregated parameters at the central server, with noise levels that adapt to data sensitivity. The framework thus enables healthcare providers to collaboratively train AI models on local patient data while preventing information leakage in either direction. To optimize performance under these privacy constraints, we incorporate gradient sampling techniques and apply RMSprop optimization at both the healthcare providers and the central server. Experimental results on the MIMIC-III and eICU healthcare datasets demonstrate that HealthFedDP achieves 10.65% higher accuracy than the best baseline method while requiring only 81 communication rounds versus the baseline's 700. Furthermore, the framework shows particular strength in protecting sensitive clinical features, with information leakage consistently maintained below 0.051% across various attack scenarios.
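To make the dual-layer idea concrete, the following is a minimal illustrative sketch, not the paper's implementation: clients clip and noise their updates (first layer), and the server adds a second layer of noise to the aggregate. The clipping norm, noise multipliers, and the fixed per-round sigma values are assumptions chosen for illustration; the paper's adaptive sensitivity-based rule is not specified here.

```python
import numpy as np

def clip_and_noise(update, clip_norm, sigma, rng):
    """Client layer: clip the update to clip_norm, then add Gaussian noise.

    sigma is a noise multiplier (assumed; the paper adapts it to data
    sensitivity), so the noise std is sigma * clip_norm.
    """
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, size=update.shape)

def aggregate_with_server_noise(updates, server_sigma, rng):
    """Server layer: average client updates, then add a second noise layer."""
    mean = np.mean(updates, axis=0)
    return mean + rng.normal(0.0, server_sigma, size=mean.shape)

rng = np.random.default_rng(0)
client_updates = [rng.normal(size=10) for _ in range(5)]
# Hypothetical fixed noise levels; HealthFedDP would adapt these per round.
noisy = [clip_and_noise(u, clip_norm=1.0, sigma=0.5, rng=rng)
         for u in client_updates]
global_update = aggregate_with_server_noise(noisy, server_sigma=0.1, rng=rng)
```

With zero noise the pipeline reduces to clipped federated averaging, which makes the privacy/utility trade-off explicit: both sigmas control how much each layer perturbs the model.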