References
[1] B. Cao, Y. X. Li, L. Zhang, L. Zhang, S. Mumtaz, Z. Y. Zhou, and M. G. Peng, When internet of things meets blockchain: Challenges in distributed consensus, IEEE Netw., vol. 33, no. 6, pp. 133-139, 2019.
[2] M. H. Min, L. Xiao, Y. Chen, P. Cheng, D. Wu, and W. H. Zhuang, Learning-based computation offloading for IoT devices with energy harvesting, IEEE Trans. Veh. Technol., vol. 68, no. 2, pp. 1930-1941, 2019.
[3] J. H. Zhao, Q. P. Li, Y. Gong, and K. Zhang, Computation offloading and resource allocation for cloud assisted mobile edge computing in vehicular networks, IEEE Trans. Veh. Technol., vol. 68, no. 8, pp. 7944-7956, 2019.
[4] B. Cao, L. Zhang, Y. Li, D. Q. Feng, and W. Cao, Intelligent offloading in multi-access edge computing: A state-of-the-art review and framework, IEEE Commun. Mag., vol. 57, no. 3, pp. 56-62, 2019.
[5] M. G. Peng, S. Yan, K. C. Zhang, and C. G. Wang, Fog-computing-based radio access networks: Issues and challenges, IEEE Netw., vol. 30, no. 4, pp. 46-53, 2016.
[6] T. Dang and M. G. Peng, Joint radio communication, caching, and computing design for mobile virtual reality delivery in fog radio access networks, IEEE J. Sel. Areas Commun., vol. 37, no. 7, pp. 1594-1607, 2019.
[7] L. X. Chen, P. Zhou, L. Gao, and J. Xu, Adaptive fog configuration for the industrial internet of things, IEEE Trans. Ind. Informat., vol. 14, no. 10, pp. 4656-4664, 2018.
[8] H. Y. Xiang, M. G. Peng, Y. H. Sun, and S. Yan, Mode selection and resource allocation in sliced fog radio access networks: A reinforcement learning approach, IEEE Trans. Veh. Technol., vol. 69, no. 4, pp. 4271-4284, 2020.
[9] Y. H. Sun, M. G. Peng, and S. W. Mao, Deep reinforcement learning-based mode selection and resource management for green fog radio access networks, IEEE Internet Things J., vol. 6, no. 2, pp. 1960-1971, 2019.
[10] L. T. Tan and R. Q. Hu, Mobility-aware edge caching and computing in vehicle networks: A deep reinforcement learning, IEEE Trans. Veh. Technol., vol. 67, no. 11, pp. 10,190-10,203, 2018.
[11] Y. F. Wei, F. R. Yu, M. Song, and Z. Han, Joint optimization of caching, computing, and radio resources for fog-enabled IoT using natural actor-critic deep reinforcement learning, IEEE Internet Things J., vol. 6, no. 2, pp. 2061-2073, 2019.
[12] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al., Human-level control through deep reinforcement learning, Nature, vol. 518, no. 7540, pp. 529-533, 2015.
[13] C. Wu, T. Yoshinaga, Y. S. Ji, T. Murase, and Y. Zhang, A reinforcement learning-based data storage scheme for vehicular ad hoc networks, IEEE Trans. Veh. Technol., vol. 66, no. 7, pp. 6336-6348, 2017.
[14] B. H. Liu, C. X. Liu, M. G. Peng, Y. Q. Liu, and S. Yan, Resource allocation for non-orthogonal multiple access-enabled fog radio access networks, IEEE Trans. Wirel. Commun., vol. 19, no. 6, pp. 3867-3878, 2020.
[15] B. H. Liu, C. X. Liu, and M. G. Peng, Resource allocation for energy-efficient MEC in NOMA-enabled massive IoT networks, IEEE J. Sel. Areas Commun.
[16] L. Q. Liu, Z. Chang, X. J. Guo, S. W. Mao, and T. Ristaniemi, Multiobjective optimization for computation offloading in fog computing, IEEE Internet Things J., vol. 5, no. 1, pp. 283-294, 2018.
[17] Z. Y. Zhao, S. Q. Bu, T. Z. Zhao, Z. P. Yin, M. G. Peng, Z. G. Ding, and T. Q. S. Quek, On the design of computation offloading in fog radio access networks, IEEE Trans. Veh. Technol., vol. 68, no. 7, pp. 7136-7149, 2019.
[18] L. Zhang, B. Cao, Y. Li, M. G. Peng, and G. Feng, A multi-stage stochastic programming based offloading policy for fog enabled IoT-eHealth, IEEE J. Sel. Areas Commun.
[19] J. B. Du, L. Q. Zhao, J. Feng, and X. L. Chu, Computation offloading and resource allocation in mixed fog/cloud computing systems with min-max fairness guarantee, IEEE Trans. Commun., vol. 66, no. 4, pp. 1594-1608, 2018.
[20] Y. M. Liu, F. R. Yu, X. Li, H. Ji, and V. C. M. Leung, Distributed resource allocation and computation offloading in fog and cloud networks with non-orthogonal multiple access, IEEE Trans. Veh. Technol., vol. 67, no. 12, pp. 12,137-12,151, 2018.
[21] B. Cao, S. C. Xia, J. W. Han, and Y. Li, A distributed game methodology for crowdsensing in uncertain wireless scenario, IEEE Trans. Mobile Comput., vol. 19, no. 1, pp. 15-28, 2020.
[22] Y. H. Sun, M. G. Peng, Y. C. Zhou, Y. Z. Huang, and S. W. Mao, Application of machine learning in wireless networks: Key techniques and open issues, IEEE Commun. Surv. Tutor., vol. 21, no. 4, pp. 3072-3108, 2019.
[23] L. Lei, H. J. Xu, X. Xiong, K. Zheng, W. Xiang, and X. B. Wang, Multi-user resource control with deep reinforcement learning in IoT edge computing, arXiv preprint arXiv:1906.07860, 2019.
[24] X. F. Chen, H. G. Zhang, C. Wu, S. W. Mao, Y. S. Ji, and M. Bennis, Optimized computation offloading performance in virtual edge computing systems via deep reinforcement learning, IEEE Internet Things J., vol. 6, no. 3, pp. 4005-4018, 2019.
[25] L. Huang, S. Z. Bi, and Y. J. A. Zhang, Deep reinforcement learning for online computation offloading in wireless powered mobile-edge computing networks, IEEE Trans. Mobile Comput., vol. 19, no. 11, pp. 2581-2593, 2020.
[26] Y. Liu, H. M. Yu, S. L. Xie, and Y. Zhang, Deep reinforcement learning for offloading and resource allocation in vehicle edge computing and networks, IEEE Trans. Veh. Technol., vol. 68, no. 11, pp. 11,158-11,168, 2019.
[27] H. Ye, G. Y. Li, and B. H. F. Juang, Deep reinforcement learning based resource allocation for V2V communications, IEEE Trans. Veh. Technol., vol. 68, no. 4, pp. 3163-3173, 2019.
[28] R. P. Li, Z. F. Zhao, Q. Sun, C. L. I, C. Y. Yang, X. F. Chen, M. J. Zhao, and H. G. Zhang, Deep reinforcement learning for resource management in network slicing, IEEE Access, vol. 6, pp. 74,429-74,441, 2018.
[29] F. Meng, P. Chen, L. N. Wu, and J. L. Cheng, Power allocation in multi-user cellular networks: Deep reinforcement learning approaches, IEEE Trans. Wirel. Commun., vol. 19, no. 10, pp. 6255-6267, 2020.
[30] X. J. Li, J. Fang, W. Cheng, H. P. Duan, Z. Chen, and H. B. Li, Intelligent power control for spectrum sharing in cognitive radios: A deep reinforcement learning approach, IEEE Access, vol. 6, pp. 25,463-25,473, 2018.
[31] X. M. He, K. Wang, H. W. Huang, T. Miyazaki, Y. X. Wang, and S. Guo, Green resource allocation based on deep reinforcement learning in content-centric IoT, IEEE Trans. Emerg. Top. Comput., vol. 8, no. 3, pp. 781-796, 2020.
[32] A. Yousefpour, G. Ishigaki, R. Gour, and J. P. Jue, On reducing IoT service delay via fog offloading, IEEE Internet Things J., vol. 5, no. 2, pp. 998-1010, 2018.
[33] R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction, 2nd ed. Cambridge, MA, USA: MIT Press, 2018.
[34] H. Zhu, Y. Cao, X. Wei, W. Wang, T. Jiang, and S. Jin, Caching transient data for internet of things: A deep reinforcement learning approach, IEEE Internet Things J., vol. 6, no. 2, pp. 2074-2083, 2019.