As an emerging privacy-preserving machine learning framework, Federated Learning (FL) enables different clients to collaboratively train a shared model by exchanging and aggregating model parameters while raw data are kept local and private. When this learning framework is applied to Deep Reinforcement Learning (DRL), the resulting Federated Reinforcement Learning (FRL) can circumvent the heavy data sampling required in conventional DRL and benefit from diversified training data, in addition to the privacy preservation offered by FL. Existing FRL implementations presuppose that clients have compatible tasks which a single global model can cover. In practice, however, clients usually have incompatible (different but still similar) personalized tasks, which we call task shift. This shift may severely hinder the application of FRL in practice. In this paper, we propose a Federated Meta Reinforcement Learning (FMRL) framework by integrating Model-Agnostic Meta-Learning (MAML) and FRL. Specifically, we innovatively utilize Proximal Policy Optimization (PPO) to fulfil multi-step local training with a single round of sampling. Moreover, considering the sensitivity of learning rate selection in FRL, we reconstruct the aggregation optimizer with the federated version of Adam (Fed-Adam) on the server side. The experiments demonstrate that, in different environments, FMRL outperforms other FL methods, with high training efficiency brought by Fed-Adam.
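To make the server-side aggregation concrete, the following is a minimal sketch of a Fed-Adam update step, following the general adaptive federated optimization recipe: the server treats the average client drift as a pseudo-gradient and applies an Adam-style adaptive update to the global model. All names, hyperparameter values, and the exact update form are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fed_adam_update(global_w, client_ws, state,
                    lr=0.01, beta1=0.9, beta2=0.99, tau=1e-3):
    """One server-side Fed-Adam aggregation step (illustrative sketch).

    global_w  : current global model parameters (flat array)
    client_ws : list of client parameter arrays after local training
    state     : dict holding the server's first/second moments "m" and "v"
    """
    # Pseudo-gradient: average drift of the clients from the global model.
    delta = np.mean([w - global_w for w in client_ws], axis=0)

    # Adam-style moment estimates maintained on the server.
    m = beta1 * state["m"] + (1 - beta1) * delta
    v = beta2 * state["v"] + (1 - beta2) * delta ** 2

    # Adaptive global update; tau stabilizes coordinates with tiny variance.
    new_w = global_w + lr * m / (np.sqrt(v) + tau)
    return new_w, {"m": m, "v": v}
```

Compared with plain FedAvg (which would simply set the global model to the client mean), the adaptive per-coordinate step makes the server update far less sensitive to the choice of learning rate, which is the motivation the abstract gives for adopting Fed-Adam.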
The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).