Volume 10, Issue 1




Intelligent Predetermination of Generator Tripping Scheme: Knowledge Fusion-based Deep Reinforcement Learning Framework

Lingkang Zeng1,2, Wei Yao1 (corresponding author), Ze Hu1, Hang Shuai3, Zhouping Li1, Jinyu Wen1, Shijie Cheng1

1 State Key Laboratory of Advanced Electromagnetic Engineering and Technology, School of Electrical and Electronic Engineering, Huazhong University of Science and Technology, Wuhan 430074, China
2 Dispatching and Control Center, Central China Branch of State Grid Corporation of China, Wuhan 430077, China
3 Department of Electrical Engineering and Computer Science, University of Tennessee, Knoxville, TN 37996, USA

Abstract

The generator tripping scheme (GTS) is the most commonly used control scheme for preventing power systems from losing security and stability. A GTS is usually composed of offline predetermination and real-time scenario matching. However, manual predetermination is extremely time-consuming and labor-intensive for a large-scale modern power system. To improve the efficiency of predetermination, this paper proposes a knowledge fusion-based deep reinforcement learning (KF-DRL) framework for the intelligent predetermination of GTS. First, the Markov decision process (MDP) for the GTS problem is formulated based on transient instability events. Then, a linear action space is developed to reduce the dimensionality of the action space for multiple controllable generators. In particular, KF-DRL leverages domain knowledge about GTS to mask invalid actions during decision-making, which improves the efficiency of the learning process. Moreover, a graph convolutional network (GCN) is introduced into the policy network to enhance its learning ability. Numerical simulation results obtained on the New England power system demonstrate the superiority of the proposed KF-DRL framework over a purely data-driven DRL method.

Keywords: Deep reinforcement learning, graph convolutional network, generator tripping scheme, invalid action masking, knowledge fusion
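The invalid action masking described in the abstract can be sketched as follows. This is a minimal illustration of the general technique, not the paper's implementation: the function name, the toy action space, and the specific mask are assumptions. Logits of actions ruled out by domain knowledge are set to negative infinity before the softmax, so the policy assigns them exactly zero probability.

```python
import numpy as np

def masked_policy(logits, valid_mask):
    """Apply an invalid-action mask to raw policy logits:
    masked-out actions get probability exactly zero, and the
    remaining probabilities are renormalized by the softmax."""
    masked = np.where(valid_mask, logits, -np.inf)
    # Numerically stable softmax over the masked logits.
    shifted = masked - masked[valid_mask].max()
    exp = np.exp(shifted)
    return exp / exp.sum()

# Toy example: 4 candidate tripping actions, with actions 1 and 3
# ruled out by domain knowledge (hypothetical mask for illustration).
logits = np.array([1.0, 2.0, 0.5, 3.0])
mask = np.array([True, False, True, False])
probs = masked_policy(logits, mask)
```

Because the gradient of a zero-probability action is also zero, the agent never wastes exploration on actions that domain knowledge has already excluded, which is the efficiency gain the abstract refers to.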


Publication history

Received: 25 December 2022
Revised: 28 March 2023
Accepted: 19 May 2023
Published: 28 December 2023
Issue date: January 2024

Copyright

© 2022 CSEE.

Rights and permissions

This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
