Volume 8, Issue 4


Load Shedding Control Strategy in Power Grid Emergency State Based on Deep Reinforcement Learning

Jian Li, Sheng Chen, Xinying Wang, Tianjiao Pu
China Electric Power Research Institute, Haidian District, Beijing 100192, China

Abstract

With the large-scale integration of renewable energy and the growing power electronization of grid equipment, the impact of power system faults increases while the system's ability to withstand disturbances decreases, making fault clearance more difficult. This paper proposes an artificial-intelligence-based load shedding control strategy in which the load shedding actions, selected by deep reinforcement learning, support autonomous voltage control. First, power system operation data are used to construct the network training dataset, and a novel voltage reward function is established. This reward function, which conforms to the operating characteristics of the power grid, serves as the reward signal for deep reinforcement learning, and the Deep Deterministic Policy Gradient (DDPG) algorithm, which supports continuous action spaces, is adopted. Finally, the deep reinforcement learning network is trained iteratively to obtain a load shedding strategy for the grid voltage control problem under emergency control conditions; the resulting actions are input into the Pypower module for simulation verification, thereby realizing a joint data- and model-driven approach. Numerical simulation analysis shows that this method can effectively determine accurate load shedding actions and improve the stable operation capability of the power system.
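The training loop described above hinges on a voltage reward signal. As a rough illustration only (not the authors' formulation), such a reward could penalize bus voltages outside an assumed 0.95–1.05 p.u. band together with the total load shed, so the agent learns to restore voltage while shedding as little load as possible; the band limits and penalty weight below are illustrative assumptions:

```python
import numpy as np

V_LO, V_HI = 0.95, 1.05   # assumed acceptable voltage band (p.u.)
SHED_WEIGHT = 0.1         # assumed penalty weight on total shed load

def voltage_reward(vm, shed_frac):
    """Reward = -(sum of per-bus voltage-band violations) - weighted load shed.

    vm        : bus voltage magnitudes in p.u. (from a power-flow solution)
    shed_frac : per-bus load-shedding fractions in [0, 1] (the continuous
                DDPG action)
    """
    violation = np.maximum(0.0, V_LO - vm) + np.maximum(0.0, vm - V_HI)
    return float(-violation.sum() - SHED_WEIGHT * shed_frac.sum())

# One low bus (0.90 p.u.) violates the band by 0.05; 10% shedding there
# costs an additional 0.1 * 0.1 = 0.01, so the reward is -0.06.
r = voltage_reward(np.array([1.00, 0.90, 1.02]), np.array([0.0, 0.1, 0.0]))
print(round(r, 4))  # -> -0.06
```

In the paper's loop, `vm` would come from a Pypower power-flow solution computed after applying the shedding action to the bus loads; fixed values stand in for that solution here.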

Keywords: Artificial intelligence, deep reinforcement learning, load shedding control, power system emergency state


Publication history

Received: 17 November 2020
Revised: 28 January 2021
Accepted: 08 April 2021
Published: 10 September 2021
Issue date: July 2022

Copyright

© 2020 CSEE

Acknowledgements

This work was supported by the Science and Technology Project of SGCC (5100-202055298A-0-0-00).
