Automatic Voltage Control of Differential Power Grids Based on Transfer Learning and Deep Reinforcement Learning

Tianjing Wang1,2, Yong Tang1
1 Laboratory of Power Grid Safety and Energy Conservation, China Electric Power Research Institute, Beijing 100192, China
2 School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore

Abstract

In model-free voltage control methods, the model's accuracy often degrades when the system's devices or topology change, so an adaptive model is needed to accommodate the changing inputs. To overcome this defect, this paper proposes an automatic voltage control (AVC) method for differential power grids based on transfer learning and deep reinforcement learning (DRL). First, in constructing the Markov game for AVC, the reward accounts for both the magnitude and the number of voltage deviations. Then, an AVC method based on constrained multi-agent DRL is developed; to further improve learning efficiency, domain knowledge is used to reduce the action space. Next, distribution adaptation transfer learning is introduced for transferring AVC between systems with the same structure but different topological relations or parameters, which performs well without any further training even when the topology or parameters change. Moreover, for transferring AVC between different power grids, parameter-based transfer learning is developed, which improves the training speed and performance on the target system. Finally, the method's efficacy is tested on two IEEE systems and two real-world power grids.
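To make the reward design concrete, below is a minimal Python sketch of a reward that penalizes both the magnitude and the number of per-bus voltage deviations, as the abstract describes. The voltage band and the weights w_mag and w_cnt are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

def voltage_reward(v, v_min=0.95, v_max=1.05, w_mag=1.0, w_cnt=0.1):
    """Reward for per-bus voltage magnitudes v (p.u.).

    Penalizes both how far voltages stray outside [v_min, v_max] and how
    many buses violate the band; the weights here are illustrative only.
    """
    # Deviation beyond the allowed band; zero for buses inside it.
    dev = np.maximum(v - v_max, 0.0) + np.maximum(v_min - v, 0.0)
    magnitude_penalty = w_mag * dev.sum()          # total deviation size
    count_penalty = w_cnt * np.count_nonzero(dev)  # number of violating buses
    return -(magnitude_penalty + count_penalty)

# Example: buses at 1.08 and 0.94 p.u. violate the band.
print(voltage_reward(np.array([1.08, 1.00, 0.94])))  # approx. -0.24
```

For the parameter-based transfer step, the sketch below shows one common form of such transfer: initializing a target-grid actor network from source-grid weights and fine-tuning only part of it. The architecture, dimensions, freezing strategy, and file name are hypothetical assumptions; the paper's actual agents may differ.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """A simple MLP actor; layer sizes are illustrative placeholders."""
    def __init__(self, n_obs, n_act):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(n_obs, 128), nn.ReLU(),
                                     nn.Linear(128, 128), nn.ReLU())
        self.head = nn.Linear(128, n_act)

    def forward(self, x):
        return torch.tanh(self.head(self.feature(x)))

# 1) After training on the source grid, save the learned parameters.
source = Actor(n_obs=50, n_act=10)
torch.save(source.state_dict(), "source_actor.pt")  # hypothetical path

# 2) Initialize the target-grid actor from the source weights (the two
#    grids are assumed here to share observation/action dimensions),
#    freeze the feature extractor, and fine-tune only the output head.
target = Actor(n_obs=50, n_act=10)
target.load_state_dict(torch.load("source_actor.pt"))
for p in target.feature.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(target.head.parameters(), lr=1e-4)
```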

Keywords: Deep reinforcement learning, transfer learning, voltage control, differential power grids


Publication history

Received: 25 August 2021
Revised: 23 March 2022
Accepted: 13 April 2022
Published: 18 August 2022
Issue date: May 2023

Copyright

© 2021 CSEE.

Rights and permissions

This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
