Open Access

Crowdsourced federated learning architecture with personalized privacy preservation

State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China
State Grid Beijing Electric Power Company, Beijing 100031, China

Abstract

In crowdsourced federated learning, differential privacy is commonly used to prevent the aggregation server from recovering training data from the models uploaded by clients. However, improper privacy budget settings and perturbation methods can severely degrade model performance. To strike a balance between privacy preservation and model performance, we propose a novel architecture for crowdsourced federated learning with personalized privacy preservation. In our architecture, to avoid poor model performance caused by excessive privacy preservation requirements, we establish a two-stage dynamic game between the task requestor and the clients to formulate the optimal privacy preservation strategy, allowing each client to independently control its privacy preservation level. Additionally, we design a differential privacy perturbation mechanism based on weight priorities: it divides the weights according to their relevance to the local data and applies different levels of perturbation to each type of weight. Finally, we conduct experiments on the proposed perturbation mechanism, and the results indicate that our approach achieves better global model performance under the same privacy budget.
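The weight-priority perturbation idea described above can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the relevance proxy (gradient magnitude), the split fraction `frac_high`, the budget share `eps_split`, and the use of the Laplace mechanism are all assumptions made for the sketch. The key point it demonstrates is that a fixed total budget can be divided unevenly, so that weights judged more relevant to local data receive a larger budget share and therefore less noise.

```python
import numpy as np

def perturb_by_priority(weights, grads, eps_total, frac_high=0.5,
                        eps_split=0.8, sensitivity=1.0, rng=None):
    """Split a flat weight vector into high/low relevance groups by gradient
    magnitude (a hypothetical relevance proxy), then add Laplace noise,
    giving the high-relevance group a larger share of the total privacy
    budget eps_total (i.e., less noise on the weights that matter most)."""
    rng = np.random.default_rng() if rng is None else rng

    # Rank weights by |gradient|: larger magnitude ~ more relevant to local data.
    order = np.argsort(np.abs(grads))[::-1]
    k = int(frac_high * len(weights))
    high, low = order[:k], order[k:]

    # Divide the total budget between the two groups (sequential composition).
    eps_high = eps_split * eps_total          # larger budget -> smaller noise scale
    eps_low = (1.0 - eps_split) * eps_total   # remainder for less relevant weights

    noisy = weights.astype(float).copy()
    noisy[high] += rng.laplace(0.0, sensitivity / eps_high, size=len(high))
    noisy[low] += rng.laplace(0.0, sensitivity / eps_low, size=len(low))
    return noisy
```

Because the Laplace scale is `sensitivity / eps`, the group holding 80% of the budget is perturbed far less than the group holding 20%, which is the intended trade-off: protect all weights, but concentrate the accuracy loss on the weights least tied to the local data.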

Intelligent and Converged Networks
Pages 192-206
Cite this article:
Xu Y, Qiu X, Zhang F, et al. Crowdsourced federated learning architecture with personalized privacy preservation. Intelligent and Converged Networks, 2024, 5(3): 192-206. https://doi.org/10.23919/ICN.2024.0014


Received: 31 October 2023
Revised: 04 January 2024
Accepted: 05 March 2024
Published: 30 September 2024
© All articles included in the journal are copyrighted to the ITU and TUP.

This work is available under the CC BY-NC-ND 3.0 IGO license: https://creativecommons.org/licenses/by-nc-nd/3.0/igo/
