
Collaborative Offloading Method for Digital Twin Empowered Cloud Edge Computing on Internet of Vehicles

Linjie Gu1, Mengmeng Cui2 (corresponding author), Linkun Xu3, Xiaolong Xu2
1. Changwang School of Honors, Nanjing University of Information Science and Technology, Nanjing 210044, China
2. School of Computer and Software, Nanjing University of Information Science and Technology, Nanjing 210044, China
3. School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing 210044, China

Abstract

Digital twinning and edge computing are attractive solutions for supporting computing-intensive and service-sensitive Internet of Vehicles applications. Most existing Internet of Vehicles service offloading solutions consider only edge–cloud collaboration, but the collaboration among small cell eNodeBs (SCeNBs) should not be ignored: through reasonable collaborative computing between nodes, service delays far lower than those of offloading tasks to the cloud can be obtained. The proposed framework realizes and maintains a simulation of the collaboration among SCeNB nodes by constructing and maintaining a digital twin of each SCeNB node in the central controller, thereby determining user task offloading positions, sub-channel allocation, and computing resource allocation. An algorithm named AUC-AC, based on the dominant actor–critic network and an auction mechanism, is then proposed. To obtain a better command of global information, the convolutional block attention module (CBAM) is used in the digital twin of each SCeNB node to observe its environment and learn strategies. Numerical results show that the proposed scheme outperforms several baseline algorithms in terms of service delay.
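To make the auction-based offloading decision concrete, the following minimal Python sketch shows one plausible reading of such a mechanism: each SCeNB digital twin bids the service delay it predicts for an incoming task, and the central controller offloads the task to the lowest bidder. This is an illustrative assumption, not the paper's AUC-AC implementation; the class names, delay model, and parameters below are hypothetical.

# Minimal sketch (illustrative, not the paper's AUC-AC): delay-based auction
# in which each SCeNB digital twin bids its predicted service delay and the
# central controller selects the lowest bid as the offloading position.
from dataclasses import dataclass

@dataclass
class ScenbTwin:
    name: str
    cpu_hz: float        # available computing resource (CPU cycles per second)
    uplink_bps: float    # allocated sub-channel rate to the vehicle (bits per second)

    def bid(self, task_bits: float, cycles_per_bit: float) -> float:
        """Predicted service delay = transmission delay + computation delay."""
        transmission = task_bits / self.uplink_bps
        computation = task_bits * cycles_per_bit / self.cpu_hz
        return transmission + computation

def auction_offload(twins, task_bits, cycles_per_bit):
    """Return the winning SCeNB (lowest predicted delay) and its bid."""
    bids = {t.name: t.bid(task_bits, cycles_per_bit) for t in twins}
    winner = min(bids, key=bids.get)
    return winner, bids[winner]

if __name__ == "__main__":
    twins = [
        ScenbTwin("SCeNB-1", cpu_hz=4e9, uplink_bps=20e6),
        ScenbTwin("SCeNB-2", cpu_hz=8e9, uplink_bps=10e6),
        ScenbTwin("SCeNB-3", cpu_hz=2e9, uplink_bps=40e6),
    ]
    winner, delay = auction_offload(twins, task_bits=5e6, cycles_per_bit=500)
    print(f"Offload to {winner}, predicted delay {delay:.3f} s")

In the paper's framework the bids would be produced by the digital twins' learned actor–critic policies rather than a fixed analytic delay model, but the sketch captures the role of the auction in choosing the offloading position.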

Keywords: reinforcement learning, digital twin, cloud-edge computing, actor-critic network

References (37)

[1]
M. Patel, D. Sabella, N. Sprecher, and V. Young, Mobile edge computing: A key technology towards 5G, ETSI White Paper, no. 11, pp. 1–16, 2015.
[2]
H. M. Song, H. R. Kim, and H. K. Kim, Intrusion detection system based on the analysis of time intervals of CAN messages for in-vehicle network, in Proc. 2016 Int. Conf. on Information Networking (ICOIN), Kota Kinabalu, Malaysia, 2016, pp. 63–68.
[3]
W. Y. Zhang, Z. J. Zhang, and H. C. Chao, Cooperative fog computing for dealing with big data in the internet of vehicles: Architecture and hierarchical resource management, IEEE Commun. Mag., vol. 55, no. 12, pp. 60–67, 2017.
[4]
X. L. He, Z. Y. Ren, C. H. Shi, and J. Fang, A novel load balancing strategy of software-defined cloud/fog networking in the Internet of Vehicles, China Commun., vol. 13, no. 2, pp. 140–149, 2016.
[5]
J. K. Ren, G. D. Yu, Y. H. He, and G. Y. Li, Collaborative cloud and edge computing for latency minimization, IEEE Trans. Veh. Technol., vol. 68, no. 5, pp. 5031–5044, 2019.
[6]
W. B. Fan, L. Yao, J. T. Han, F. Wu, and Y. A. Liu, Game-based multitype task offloading among mobile-edge-computing-enabled base stations, IEEE Internet Things J., vol. 8, no. 24, pp. 17691–17704, 2021.
[7]
W. Sun, H. B. Zhang, R. Wang, and Y. Zhang, Reducing offloading latency for digital twin edge networks in 6G, IEEE Trans. Veh. Technol., vol. 69, no. 10, pp. 12240–12251, 2020.
[8]
Y. L. Lu, X. H. Huang, K. Zhang, S. Maharjan, and Y. Zhang, Low-latency federated learning and blockchain for edge association in digital twin empowered 6G networks, IEEE Trans. Industr. Inform., vol. 17, no. 7, pp. 5098–5107, 2021.
[9]
C. Gehrmann and M. Gunnarsson, A digital twin based industrial automation and control system security architecture, IEEE Trans. Industr. Inform., vol. 16, no. 1, pp. 669–680, 2020.
[10]
K. R. Alasmari, R. C. Green II, and M. Alam, Mobile edge offloading using Markov decision processes, in Proc. 2nd Int. Conf. on Edge Computing - EDGE 2018, Seattle, WA, USA, 2018, pp. 80–90.
[11]
K. Arulkumaran, M. P. Deisenroth, M. Brundage, and A. A. Bharath, Deep reinforcement learning: A brief survey, IEEE Signal Process. Mag., vol. 34, no. 6, pp. 26–38, 2017.
[12]
X. L. Xu, B. W. Shen, S. Ding, G. Srivastava, M. Bilal, M. R. Khosravi, V. G. Menon, M. A. Jan, and M. L. Wang, Service offloading with deep Q-network for digital twinning-empowered internet of vehicles in edge computing, IEEE Trans. Industr. Inform., vol. 18, no. 2, pp. 1414–1423, 2022.
[13]
R. Dong, C. Y. She, W. Hardjawana, Y. H. Li, and B. Vucetic, Deep learning for hybrid 5G services in mobile edge computing systems: Learn from a digital twin, IEEE Trans. Wirel. Commun., vol. 18, no. 10, pp. 4692–4707, 2019.
[14]
Z. L. Cao, P. Zhou, R. X. Li, S. Q. Huang, and D. P. Wu, Multiagent deep reinforcement learning for joint multichannel access and task offloading of mobile-edge computing in industry 4.0, IEEE Internet Things J., vol. 7, no. 7, pp. 6201–6213, 2020.
[15]
H. F. Lu, C. H. Gu, F. Luo, W. C. Ding, S. Zheng, and Y. F. Shen, Optimization of task offloading strategy for mobile edge computing based on multi-agent deep reinforcement learning, IEEE Access, vol. 8, pp. 202573–202584, 2020.
[16]
S. Gronauer and K. Diepold, Multi-agent deep reinforcement learning: A survey, Artif. Intell. Rev., vol. 55, no. 2, pp. 895–943, 2022.
[17]
K. Zhang, J. Y. Cao, and Y. Zhang, Adaptive digital twin and multiagent deep reinforcement learning for vehicular edge computing and networks, IEEE Trans. Ind. Inform., vol. 18, no. 2, pp. 1405–1413, 2022.
[18]
T. Liu, L. Tang, W. L. Wang, Q. B. Chen, and X. P. Zeng, Digital-twin-assisted task offloading based on edge collaboration in the digital twin edge network, IEEE Internet Things J., vol. 9, no. 2, pp. 1427–1444, 2022.
[19]
X. Y. Huang, L. J. He, and W. Y. Zhang, Vehicle speed aware computing task offloading and resource allocation based on multi-agent reinforcement learning in a vehicular edge computing network, in Proc. 2020 IEEE Int. Conf. on Edge Computing (EDGE), Beijing, China, 2020, pp. 1–8.
[20]
X. L. Xu, Z. J. Fang, L. Y. Qi, W. C. Dou, Q. He, and Y. C. Duan, A deep reinforcement learning-based distributed service offloading method for edge computing empowered internet of vehicles, (in Chinese), Chin. J. Comput., vol. 44, no. 12, pp. 2382–2405, 2021.
[21]
X. Q. Zhang, H. J. Cheng, Z. Y. Yu, and N. Xiong, Design and analysis of an efficient multi-resource allocation system for cooperative computing in internet of things, IEEE Internet Things J.
[22]
V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Harley, T. P. Lillicrap, D. Silver, and K. Kavukcuoglu, Asynchronous methods for deep reinforcement learning, in Proc. 33rd Int. Conf. on Machine Learning, New York City, NY, USA, 2016, pp. 1928–1937.
[23]
Z. Q. Zhu, S. Wan, P. Y. Fan, and K. B. Letaief, Federated multiagent actor-critic learning for age sensitive mobile-edge computing, IEEE Internet Things J., vol. 9, no. 2, pp. 1053–1067, 2022.
[24]
S. Munir, S. F. Abedin, D. H. Kim, N. H. Tran, Z. Han, and C. S. Hong, A multi-agent system toward the green edge computing with microgrid, in Proc. 2019 IEEE Global Communications Conf. (GLOBECOM), Waikoloa, HI, USA, 2019, pp. 1–7.
[25]
S. Munir, S. F. Abedin, N. H. Tran, Z. Han, E. N. Huh, and C. S. Hong, Risk-aware energy scheduling for edge computing with microgrid: A multi-agent deep reinforcement learning approach, IEEE Trans. Netw. Serv. Manag., vol. 18, no. 3, pp. 3476–3497, 2021.
[26]
T. L. Mai, H. P. Yao, Z. H. Xiong, S. Guo, and D. T. Niyato, Multi-agent actor-critic reinforcement learning based in-network load balance, in Proc. GLOBECOM 2020 – 2020 IEEE Global Communications Conf., Taipei, China, 2020, pp. 1–6.
[27]
H. Tian, X. L. Xu, T. Y. Lin, Y. Cheng, C. Qian, L. Ren, and M. Bilal, DIMA: Distributed cooperative microservice caching for internet of things in edge computing by deep reinforcement learning, World Wide Web.
[28]
Q. H. Huang, X. L. Xu, and J. H. Chen, Learning-aided fine grained offloading for real-time applications in edge-cloud computing, Wirel. Netw.
[29]
R. C. Xie, X. F. Lian, Q. M. Jia, T. Huang, and Y. J. Liu, Survey on computation offloading in mobile edge computing, (in Chinese), J. Commun., vol. 39, no. 11, pp. 138–155, 2018.
[30]
I. P. Chochliouros, I. Giannoulakis, T. Kourtis, M. Belesioti, E. Sfakianakis, A. S. Spiliopoulou, N. Bompetsis, E. Kafetzakis, L. Goratti, and A. Dardamanis, A model for an innovative 5G-oriented architecture, based on small cells coordination for multi-tenancy and edge services, in Proc. 12th IFIP WG 12.5 Int. Conf. and Workshops Artificial Intelligence Applications and Innovations, Thessaloniki, Greece, 2016, pp. 666–675.
[31]
I. Giannoulakis, E. Kafetzakis, I. Trajkovska, P. S. Khodashenas, I. Chochliouros, C. Costa, I. Neokosmidis, and P. Bliznakov, The emergence of operator-neutral small cells as a strong case for cloud computing at the mobile edge, Trans. Emerg. Telecommun. Technol., vol. 27, no. 9, pp. 1152–1159, 2016.
[32]
M. Morelli, C. C. J. Kuo, and M. O. Pun, Synchronization techniques for orthogonal frequency division multiple access (OFDMA): A tutorial review, Proc. IEEE, vol. 95, no. 7, pp. 1394–1427, 2007.
[33]
Y. L. Lu, S. Maharjan, and Y. Zhang, Adaptive edge association for wireless digital twin networks in 6G, IEEE Internet Things J., vol. 8, no. 22, pp. 16219–16230, 2021.
[34]
S. Woo, J. Park, J. Y. Lee, and I. S. Kweon, CBAM: Convolutional block attention module, in Proc. 15th European Conf. Computer Vision (ECCV), Munich, Germany, 2018, pp. 3–19.
[35]
R. S. Sutton and A. G. Barto, Reinforcement learning: An introduction, IEEE Trans. Neural Netw., vol. 16, no. 1, pp. 285–286, 2005.
[36]
N. Zhang, N. Cheng, A. T. Gamage, K. Zhang, J. W. Mark, and X. M. Shen, Cloud assisted HetNets toward 5G wireless networks, IEEE Commun. Mag., vol. 53, no. 6, pp. 59–65, 2015.
[37]
J. Zhang and K. B. Letaief, Mobile edge intelligence and computing for the internet of vehicles, Proc. IEEE, vol. 108, no. 2, pp. 246–261, 2020.

Publication history

Received: 18 October 2021
Revised: 21 January 2022
Accepted: 01 March 2022
Published: 13 December 2022
Issue date: June 2023

Copyright

© The author(s) 2023.

Acknowledgements

This research was supported by the Natural Science Foundation of Jiangsu Province of China (No. BK20211284), the Financial and Science Technology Plan Project of Xinjiang Production and Construction Corps (No. 2020DB005), the National Natural Science Foundation of China (No. 61872219), and NUIST Students’ Platform for Innovation and Entrepreneurship Training Program (No. 202110300569).

Rights and permissions

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
