
Automatic Generation Control Based on Multiple-step Greedy Attribute and Multiple-level Allocation Strategy

Authors: Lei Xi, Le Zhang, Yanchun Xu, Shouxiang Wang, Chao Yang
College of Electrical Engineering and New Energy, and the Yichang Key Laboratory of Defense and Control of Cyber-Physical Systems, China Three Gorges University, Yichang 443002, China
College of Electrical Engineering and New Energy, China Three Gorges University, Yichang 443002, China
Key Laboratory of Smart Grid of Ministry of Education, Tianjin University, Tianjin 300072, China

Abstract

The strong stochastic disturbances caused by large-scale distributed energy access affect the security, stability, and economic operation of power grids. This paper proposes a novel multiple-step greedy policy based on consensus Q-learning (MSGP-CQ), an automatic generation control (AGC) strategy for distributed energy that incorporates a multiple-step greedy attribute and a multiple-level allocation strategy. Predictive multiple-step iterative updating accelerates the convergence speed and learning efficiency of the MSGP algorithm, while the CQ algorithm, with its collaborative consensus and self-learning characteristics, enhances the adaptability of the power allocation strategy under strong stochastic disturbances and yields the total power commands of the grid together with the dynamic optimal allocation of unit power. Simulations on an improved IEEE two-area load-frequency control (LFC) power system and on an interconnected system model of intelligent distribution network (IDN) groups incorporating a large amount of distributed energy show that the proposed strategy achieves optimal coordinated control and power allocation. Compared with several existing intelligent algorithms, MSGP-CQ exhibits stronger robustness and faster dynamic optimization, reduces generation costs, and mitigates the strong stochastic disturbances caused by large-scale distributed energy access to the grid.
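The multiple-step greedy attribute can be illustrated with a minimal tabular sketch: instead of the one-step Q-learning target, each state-action pair is updated toward an n-step return that looks several transitions ahead before bootstrapping with a greedy maximum. The toy chain environment, hyperparameters, and update rule below are illustrative assumptions for exposition, not the paper's implementation:

```python
import random

# Illustrative sketch (not the paper's implementation): tabular Q-learning
# with a multiple-step (n-step) greedy target on a hypothetical toy MDP.
# Chain of states 0..4, actions -1/+1, reward 1 on reaching state 4.
N_STATES, GOAL, GAMMA, ALPHA, N_STEP, EPS = 5, 4, 0.9, 0.2, 3, 0.1
ACTIONS = (-1, 1)

def env_step(s, a):
    s2 = min(max(s + a, 0), N_STATES - 1)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

def train(episodes=500, seed=1):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, traj, done = 0, [], False
        while not done:  # collect one episode with an epsilon-greedy policy
            if rng.random() < EPS:
                a = rng.choice(ACTIONS)
            else:  # greedy action, ties broken at random
                a = max(ACTIONS, key=lambda x: (Q[(s, x)], rng.random()))
            s2, r, done = env_step(s, a)
            traj.append((s, a, r, s2))
            s = s2
        T = len(traj)
        for t in range(T):
            # Multiple-step target: discounted rewards over up to N_STEP
            # transitions, bootstrapped with a greedy max if still in-episode.
            horizon = min(t + N_STEP, T)
            G = sum((GAMMA ** (i - t)) * traj[i][2] for i in range(t, horizon))
            if horizon < T:
                s_boot = traj[horizon - 1][3]
                G += (GAMMA ** (horizon - t)) * max(Q[(s_boot, a2)] for a2 in ACTIONS)
            st, at = traj[t][0], traj[t][1]
            Q[(st, at)] += ALPHA * (G - Q[(st, at)])
    return Q

Q = train()
```

Because each update propagates reward information N_STEP transitions at once, value estimates along the chain converge in fewer episodes than with one-step targets, which is the mechanism behind the faster convergence claimed for MSGP.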

Keywords: Automatic generation control, collaborative consensus, multiple-step greedy attribute, multiple-level allocation
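The collaborative-consensus idea behind the power allocation can likewise be sketched in miniature: units exchange values only with graph neighbors, iterate a doubly stochastic averaging step until they agree, and then each unit computes its share of the total power command locally. The topology, Metropolis weights, and capacity-proportional split below are illustrative assumptions, not the paper's CQ algorithm:

```python
# Illustrative sketch (assumptions only, not the paper's CQ algorithm):
# discrete-time average consensus over a hypothetical 4-unit communication
# graph, letting each unit estimate total regulation capacity and take its
# capacity-proportional share of a total AGC command without a coordinator.

neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # path-graph topology
capacity = [10.0, 20.0, 30.0, 40.0]  # MW, per-unit regulation capacity
total_command = 50.0                 # MW, total power command to allocate

def metropolis_weights(neighbors):
    # Symmetric, doubly stochastic weights: x <- W x converges to the average.
    n = len(neighbors)
    deg = {i: len(neighbors[i]) for i in neighbors}
    W = [[0.0] * n for _ in range(n)]
    for i in neighbors:
        for j in neighbors[i]:
            W[i][j] = 1.0 / (1 + max(deg[i], deg[j]))
        W[i][i] = 1.0 - sum(W[i][j] for j in neighbors[i])
    return W

def consensus(x, W, iters=200):
    for _ in range(iters):
        x = [sum(W[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]
    return x

W = metropolis_weights(neighbors)
avg_cap = consensus(capacity, W)  # every entry converges to the mean capacity
n = len(capacity)
# Each unit i only needs its own capacity and its consensus estimate:
# n * avg_cap[i] approximates the total capacity of all units.
allocation = [total_command * capacity[i] / (n * avg_cap[i]) for i in range(n)]
```

Since the consensus estimates agree across units, the local shares sum to the total command, so the dispatch is consistent even though no unit ever sees the full capacity vector.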


Publication history

Received: 19 June 2020
Revised: 12 September 2020
Accepted: 25 October 2020
Published: 20 November 2020
Issue date: January 2022

Copyright

© 2020 CSEE

Acknowledgements


This work was supported in part by the National Natural Science Foundation of China (No. 51707102).
