
Ensuring federated learning reliability for infrastructure-enhanced autonomous driving

Benjamin Acar, Marius Sterling
Technical University of Berlin, Berlin, 10623, Germany

Abstract

The application of machine learning techniques, particularly in the context of autonomous driving, has grown rapidly in recent years, and the collection of high-quality datasets has become a prerequisite for training new models. However, concerns about privacy and data usage have led to a growing demand for decentralized methods that can learn without relying on centrally pre-collected data. Federated learning (FL) offers a potential solution to this problem by enabling individual clients to contribute to the learning process by sending model updates rather than raw training data. While FL has proven successful in many settings, new challenges have emerged, especially regarding network availability during training: since a single global instance is responsible for collecting updates from local clients, the network risks downtime if this global server fails. In this study, we propose a novel concept that addresses this issue by adding redundancy to the network. Rather than deploying a single global model, we deploy multiple replicas of the global model and use consensus algorithms to keep these replicas synchronized and up to date. With these replicas in place, the network remains available even if a global instance fails. Our solution thus enables the development of reliable FL systems, particularly in system architectures suited to infrastructure-enhanced autonomous driving. Consequently, our findings support the more effective realization of use cases in the context of cooperative, connected, and automated mobility.
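The abstract only sketches the mechanism, so the following minimal Python example (not taken from the paper; all class and function names are hypothetical) illustrates the core idea in its simplest form: clients submit weight updates instead of raw data, a coordinator averages them, and the new global model is committed only after a majority of replicas acknowledge the write, which here stands in for a full consensus protocol such as Raft running across replicated global instances.

# Illustrative sketch only: toy federated averaging with a replicated global model.
# A real deployment would replace the replication step with a consensus-backed store.

from dataclasses import dataclass, field
from typing import List

Weights = List[float]


def federated_average(updates: List[Weights]) -> Weights:
    """Average the model updates sent by the clients (no raw data leaves a client)."""
    n = len(updates)
    return [sum(ws) / n for ws in zip(*updates)]


@dataclass
class Replica:
    """One copy of the global model; any replica can serve the model if others fail."""
    model: Weights = field(default_factory=list)
    version: int = 0

    def store(self, model: Weights, version: int) -> bool:
        self.model, self.version = list(model), version
        return True  # acknowledge the write


@dataclass
class GlobalCoordinator:
    """Aggregates client updates and replicates the result to all global replicas."""
    replicas: List[Replica]
    version: int = 0

    def aggregation_round(self, client_updates: List[Weights]) -> Weights:
        new_model = federated_average(client_updates)
        self.version += 1
        # Commit only if a majority of replicas acknowledge (stand-in for consensus).
        acks = sum(r.store(new_model, self.version) for r in self.replicas)
        if acks <= len(self.replicas) // 2:
            raise RuntimeError("replication failed: no majority of acknowledgements")
        return new_model


if __name__ == "__main__":
    coordinator = GlobalCoordinator(replicas=[Replica() for _ in range(3)])
    updates = [[0.1, 0.2, 0.3], [0.3, 0.2, 0.1], [0.2, 0.2, 0.2]]
    print(coordinator.aggregation_round(updates))  # -> [0.2, 0.2, 0.2]

In a production setting, the toy Replica/GlobalCoordinator logic would be replaced by a consensus-backed, replicated store (for example, a replicated in-memory store deployed on Kubernetes, as the keywords suggest), so that a surviving replica can immediately take over aggregation if the current global instance fails.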

Keywords: autonomous driving, federated learning (FL), Kubernetes



Publication history

Received: 07 March 2023
Accepted: 10 May 2023
Published: 30 September 2023
Issue date: September 2023

Copyright

© The author(s) 2023.

Rights and permissions

This is an open access article under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
