Open Access

Deep reinforcement learning based computation offloading and resource allocation for low-latency fog radio access networks

Key Laboratory of Universal Wireless Communications (Ministry of Education), Beijing University of Posts and Telecommunications, Beijing 100876, China

Abstract

Fog Radio Access Networks (F-RANs) are regarded as a promising architecture for supporting Internet of Things (IoT) services by leveraging edge caching and edge computing. However, existing approaches to computation offloading and resource allocation are inefficient and consider only a static communication mode, while the growing demand for low-latency services and high throughput poses substantial challenges for F-RANs. We formulate a joint problem of mode selection, resource allocation, and power allocation to minimize latency under various constraints, and propose a Deep Reinforcement Learning (DRL) based joint computation offloading and resource allocation scheme that achieves a suboptimal solution. The core idea is that a DRL controller intelligently decides whether to process a generated computation task locally at the device or offload it to a fog access point or the cloud server, and then allocates computation and power resources according to the chosen serving tier. Simulation results show that the proposed approach significantly reduces latency and increases system throughput.
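To make the controller's decision loop concrete, the following is a minimal sketch of how such a DRL agent could be structured: a small DQN whose discrete actions combine a serving tier (local device, fog access point, or cloud server) with a quantized transmit-power level, trained on a reward equal to the negative of the observed task latency. The state layout, network sizes, and action discretization are illustrative assumptions for exposition, not the paper's exact formulation.

```python
# Illustrative sketch (not the paper's exact design): a DQN controller that
# jointly picks a serving tier and a discretized power level, rewarded with
# the negative of the measured task latency.
import random
from collections import deque

import torch
import torch.nn as nn

N_TIERS = 3          # 0: local device, 1: fog access point, 2: cloud server
N_POWER_LEVELS = 4   # quantized transmit-power choices (assumption)
N_ACTIONS = N_TIERS * N_POWER_LEVELS
STATE_DIM = 5        # e.g., task size, required CPU cycles, channel gain, queues

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)

qnet, target = QNet(), QNet()
target.load_state_dict(qnet.state_dict())
optimizer = torch.optim.Adam(qnet.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)   # (state, action, reward, next_state) tuples
GAMMA, EPSILON = 0.99, 0.1

def decode(a):
    """Map a flat action index to (tier_index, power_index)."""
    return divmod(a, N_POWER_LEVELS)

def act(state):
    """Epsilon-greedy choice of a flat action index over tier x power."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(qnet(torch.tensor(state, dtype=torch.float32)).argmax())

def train_step(batch_size=64):
    """One TD update on a sampled minibatch; reward = -latency."""
    if len(replay) < batch_size:
        return
    s, a, r, s2 = map(list, zip(*random.sample(replay, batch_size)))
    s = torch.tensor(s, dtype=torch.float32)
    s2 = torch.tensor(s2, dtype=torch.float32)
    a = torch.tensor(a)
    r = torch.tensor(r, dtype=torch.float32)
    q = qnet(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        y = r + GAMMA * target(s2).max(1).values
    loss = nn.functional.mse_loss(q, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

After each task completes, the environment would feed the measured completion latency back as a negative reward, and the target network would be periodically synchronized with `qnet`, as in standard DQN training.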

Intelligent and Converged Networks
Pages 243-257
Cite this article:
Shafiqur Rahman GM, Dang T, Ahmed M. Deep reinforcement learning based computation offloading and resource allocation for low-latency fog radio access networks. Intelligent and Converged Networks, 2020, 1(3): 243-257. https://doi.org/10.23919/ICN.2020.0020

Views: 1394 · Downloads: 98 · Crossref citations: 63 · Scopus citations: 73

Received: 17 October 2020
Revised: 02 November 2020
Accepted: 28 November 2020
Published: 30 December 2020
© All articles included in the journal are copyrighted to the ITU and TUP. This work is available under the CC BY-NC-ND 3.0 IGO license: https://creativecommons.org/licenses/by-nc-nd/3.0/igo/.
