For model-free voltage control methods, model accuracy often degrades when the system's devices or topology change, so an adaptive model is needed to accommodate changes in the inputs. To overcome these defects of model-free control, this paper proposes an automatic voltage control (AVC) method for different power grids based on transfer learning and deep reinforcement learning (DRL). First, in constructing the Markov game for AVC, the reward accounts for both the magnitude and the number of voltage deviations. Then, an AVC method based on constrained multi-agent DRL is developed. To further improve learning efficiency, domain knowledge is used to reduce the action space. Next, distribution adaptation transfer learning is introduced for the AVC transfer scenario of systems with the same structure but distinct topological relations/parameters; it performs well without any further training even when the topology or parameters change. Moreover, for the AVC transfer scenario across different power grids, parameter-based transfer learning is developed, which improves training speed and performance on the target system. Finally, the method's efficacy is tested on two IEEE systems and two real-world power grids.
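The reward described above can be sketched as follows. This is a minimal illustration, not the paper's actual formulation: the voltage limits, weights, and function names are all assumptions chosen for the example.

```python
import numpy as np

# Hypothetical sketch of a reward penalizing both the magnitude and the
# number of bus-voltage deviations (all names/constants are assumptions).
V_MIN, V_MAX = 0.95, 1.05  # assumed per-unit voltage limits

def voltage_reward(v, w_mag=1.0, w_cnt=0.1):
    """Return a (non-positive) reward from per-unit bus voltages `v`."""
    v = np.asarray(v, dtype=float)
    # Deviation magnitude: distance outside the [V_MIN, V_MAX] band.
    dev = np.maximum(v - V_MAX, 0.0) + np.maximum(V_MIN - v, 0.0)
    n_violations = int(np.count_nonzero(dev))  # number of deviated buses
    return -(w_mag * dev.sum() + w_cnt * n_violations)
```

Weighting the violation count alongside the total deviation discourages an agent from trading one large violation for many small ones, which a magnitude-only reward would permit.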
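The parameter-based transfer step can be illustrated with a small sketch: warm-start the target-grid policy with source-grid weights and fine-tune only part of the network. This is an assumption about the general technique, not the paper's implementation; the layer names and shapes are illustrative.

```python
import numpy as np

# Hypothetical sketch of parameter-based transfer learning: copy source-grid
# policy weights into the target policy, freeze early layers, and mark only
# the output layer as trainable for fine-tuning on the target grid.
rng = np.random.default_rng(0)

def init_policy(n_in, n_hidden, n_out):
    """A toy two-layer policy, represented as a dict of weight matrices."""
    return {"W1": rng.normal(size=(n_in, n_hidden)) * 0.1,
            "W2": rng.normal(size=(n_hidden, n_out)) * 0.1}

def transfer(source, freeze=("W1",)):
    """Copy source parameters; report which layers remain trainable."""
    target = {k: v.copy() for k, v in source.items()}
    trainable = [k for k in target if k not in freeze]
    return target, trainable

source_policy = init_policy(10, 32, 4)          # trained on the source grid
target_policy, trainable = transfer(source_policy)
# Only "W2" would be updated during fine-tuning on the target grid.
```

Freezing the early layers preserves the general voltage-control features learned on the source grid, so the target grid needs far fewer training samples, consistent with the faster training the abstract claims.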
This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).