In crowdsourced federated learning, differential privacy is commonly used to prevent the aggregation server from recovering training data from the models uploaded by clients. However, improper privacy budget settings and perturbation methods can severely degrade model performance. To strike a balance between privacy preservation and model performance, we propose a novel architecture for crowdsourced federated learning with personalized privacy preservation. To avoid the poor model performance that results from excessive privacy requirements, our architecture establishes a two-stage dynamic game between the task requestor and the clients to derive the optimal privacy preservation strategy, allowing each client to independently control its privacy preservation level. Additionally, we design a differential privacy perturbation mechanism based on weight priorities: it partitions the weights according to their relevance to the local data and applies a different level of perturbation to each group. Finally, we conduct experiments on the proposed perturbation mechanism, and the results show that our approach achieves better global model performance under the same privacy budget.
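The weight-priority idea can be sketched in a few lines. The abstract does not specify how relevance is scored or how the budget is split, so the sketch below makes two illustrative assumptions: each weight carries a precomputed relevance score (e.g., derived from its gradient magnitude on local data), and weights above the median score receive a larger share of the privacy budget (hence less Laplace noise) via hypothetical parameters `eps_high` and `eps_low`. This is a minimal illustration, not the paper's actual mechanism.

```python
import numpy as np

def perturb_weights(weights, relevance, eps_high=2.0, eps_low=0.5,
                    sensitivity=1.0, rng=None):
    """Two-level Laplace perturbation of model weights (illustrative sketch).

    Weights whose relevance score is at or above the median are treated as
    high-priority and perturbed under the larger budget eps_high (smaller
    noise scale); the remaining weights are perturbed under eps_low.
    The relevance scoring and budget split are assumptions, not the
    paper's exact scheme.
    """
    rng = np.random.default_rng() if rng is None else rng
    weights = np.asarray(weights, dtype=float)
    relevance = np.asarray(relevance, dtype=float)

    high = relevance >= np.median(relevance)  # high-priority weight mask
    noisy = weights.copy()
    # Laplace mechanism: noise scale = sensitivity / epsilon,
    # so a larger epsilon means lighter perturbation.
    noisy[high] += rng.laplace(0.0, sensitivity / eps_high, high.sum())
    noisy[~high] += rng.laplace(0.0, sensitivity / eps_low, (~high).sum())
    return noisy
```

Under this split, the high-priority half of the weights is distorted less, which is the intuition behind achieving better utility at the same total privacy budget.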
This work is available under the CC BY-NC-ND 3.0 IGO license: https://creativecommons.org/licenses/by-nc-nd/3.0/igo/