Edge computing nodes undertake an increasing number of tasks as business density rises, so efficiently allocating large-scale, dynamic workloads to edge computing resources has become a critical challenge. This study proposes an edge task scheduling approach based on an improved Double Deep Q-Network (Double DQN), which separates the calculation of target Q values from the selection of actions across two networks. A new reward function is designed, a control unit is added to the agent's experience replay unit, and the management of experience data is modified to fully exploit its value and improve learning efficiency. Because reinforcement learning agents usually learn from an ignorant initial state, which is inefficient, this study also proposes a novel particle swarm optimization algorithm with an improved fitness function that generates optimized task scheduling solutions. These solutions are used to pre-train the agent's network parameters, giving it a better initial cognition level. The proposed algorithm is compared with six other methods in simulation experiments, and the results show that it outperforms the benchmark methods in terms of makespan.
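The decoupling that Double DQN introduces, where the online network selects the greedy next action and the separate target network evaluates it, can be sketched for a single transition as follows. This is a minimal illustration of the standard Double DQN target, not the paper's implementation; the function name and the list-based Q-value interface are assumptions for clarity.

```python
def double_dqn_target(reward, next_q_online, next_q_target, gamma=0.99, done=False):
    """Compute the Double DQN learning target for one transition.

    next_q_online: Q values of the next state from the online network.
    next_q_target: Q values of the next state from the target network.
    """
    if done:
        return reward
    # The online network SELECTS the greedy next action ...
    a_star = max(range(len(next_q_online)), key=lambda a: next_q_online[a])
    # ... while the target network EVALUATES that action, which reduces
    # the overestimation bias of using one network for both roles.
    return reward + gamma * next_q_target[a_star]
```

In plain DQN, both the argmax and the evaluation would use the target network's values; splitting the two roles is what distinguishes the Double DQN update.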
The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).